If you have built an LFS system, you already know how to download and unpack software. Some of that information is repeated here, however, for those who are new to building their own software.
Each page with installation instructions gives the URL from which the package can be downloaded. The patches, however, are stored on the LFS servers and are available via HTTP. These are referenced as needed in the installation instructions.
While you can keep the source files anywhere you like, we assume that you have unpacked the package and changed into the directory created by the unpacking process (the source directory). We also assume you have uncompressed any required patches and they are in the directory immediately above the source directory.
Although it is not explicitly stated, you should build each package in a clean source tree. If an error occurs while running configure or during compilation, it is usually best to delete the source tree and unpack the package sources again before retrying the build. Of course this does not apply if you are an advanced user accustomed to modifying Makefiles or C code, but if in doubt, start from a completely fresh source tree.
A golden rule of Unix system administration is to use superuser privileges only when necessary. So in BLFS, software is built as an unprivileged user, and the root user is used only to install the software. This approach is followed for every package in this book. Unless otherwise specified, all instructions should be executed as an unprivileged user. The book will tell you when root privileges are needed.
If a file is in .tar format and compressed, it can be unpacked by running one of the following commands:
tar -xvf filename.tar.gz
tar -xvf filename.tgz
tar -xvf filename.tar.Z
tar -xvf filename.tar.bz2
You may omit the v parameter in the commands shown above and below. Omitting it suppresses the listing of the files extracted from the archive; this shortens the extraction time and makes it easier to spot any errors that occur during the extraction.
Alternatively, the following method can be used:
bzcat filename.tar.bz2 | tar -xv
Finally, sometimes we have a compressed patch file in .patch.gz or .patch.bz2 format. The best way to apply such a patch is to pipe the output of the decompressor to the patch utility. For example:
gzip -cd ../patchname.patch.gz | patch -p1
Or for a patch compressed with bzip2:
bzcat ../patchname.patch.bz2 | patch -p1
Generally, to verify that the downloaded file is complete, many
package maintainers also distribute md5sums of the files. To verify
the md5sum of the downloaded files, download both the file and the
corresponding md5sum file to the same directory (preferably from
different on-line locations), and (assuming file.md5sum
is the md5sum file downloaded) run
the following command:
md5sum -c file.md5sum
If there are any errors, they will be reported. Note that the BLFS
book includes md5sums for all the source files also. To use the
BLFS supplied md5sums, you can create a file.md5sum
(place the md5sum data and the exact
name of the downloaded file on the same line of a file, separated
by white space) and run the command shown above. Alternately,
simply run the command shown below and compare the output to the
md5sum data shown in the BLFS book.
md5sum <name_of_downloaded_file>
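For illustration, the package name and checksum below are placeholders, not real values; a hand-made md5sum file and the check would look like this:

# The checksum (a placeholder here) is followed by white space and the exact file name.
echo "0123456789abcdef0123456789abcdef  foo-1.2.3.tar.xz" > foo-1.2.3.tar.xz.md5sum
md5sum -c foo-1.2.3.tar.xz.md5sum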
MD5 is not cryptographically secure, so the md5sums are only provided for detecting unmalicious changes to the file content. For example, an error or truncation introduced during network transfer, or a 「stealth」 update to the package from the upstream (updating the content of a released tarball instead of making a new release properly).
There is no 「100%」 secure way to make sure of the authenticity of the source files. Assuming the upstream is managing their website correctly (the private key is not leaked and the domain is not hijacked), and the trust anchors have been set up correctly using make-ca-1.13 on the BLFS system, we can reasonably trust download URLs to the upstream official website with the https protocol. Note that the BLFS book itself is published on a website with https, so you should already have some confidence in the https protocol or you wouldn't trust the book content.
If the package is downloaded from an unofficial location (for example a local mirror), checksums generated by cryptographically secure digest algorithms (for example SHA256) can be used to verify the authenticity of the package. Download the checksum file from the upstream official website (or somewhere you can trust) and compare the checksum of the package from the unofficial location with it. For example, a SHA256 checksum can be checked with the command:
sha256sum -c file.sha256sum
Note that if the checksum and the package are downloaded from the same untrusted location, verifying the package with the checksum provides no security benefit: an attacker can fake the checksum as well as compromise the package itself.
If GnuPG-2.4.3 is installed, you can also verify the authenticity of the package with a GPG signature. Import the upstream GPG public key with:
gpg --recv-key keyID
keyID
should be replaced
with the key ID from somewhere you can
trust (for example, copy it from the upstream
official website using https). Now you can verify the signature
with:
gpg --verify file.sig
The advantage of a GnuPG signature is that, once you have imported a public key which can be trusted, you can download both the package and its signature from the same unofficial location and verify them with the public key. So you won't need to connect to the official upstream website to retrieve a checksum for each new release. You only need to update the public key if it is expired or revoked.
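As an illustration of the whole procedure (the key ID and file names below are placeholders, not those of any real package):

# Import the public key once, using a key ID obtained from a trusted source.
gpg --recv-key 0123456789ABCDEF
# Verify the detached signature against the tarball; both may come from a mirror.
gpg --verify foo-1.2.3.tar.xz.sig foo-1.2.3.tar.xz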
For larger packages, it is convenient to create log files instead
of staring at the screen hoping to catch a particular error or
warning. Log files are also useful for debugging and keeping
records. The following command allows you to create an installation
log. Replace <command>
with the command
you intend to execute.
( <command> 2>&1 | tee compile.log && exit $PIPESTATUS )
2>&1
redirects error messages to
the same location as standard output. The tee command allows viewing of the
output while logging the results to a file. The parentheses around
the command run the entire command in a subshell and finally the
exit $PIPESTATUS
command ensures the result of the <command>
is returned as the
result and not the result of the tee command.
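For example, to build a package with make and keep a log of the whole run (assuming make is the build command being logged):

( make 2>&1 | tee compile.log && exit $PIPESTATUS )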
For many modern systems with multiple processors (or cores) the compilation time for a package can be reduced by performing a "parallel make" by either setting an environment variable or telling the make program to simultaneously execute multiple jobs.
For instance, an Intel Core i9-13900K CPU contains 8 performance (P) cores and 16 efficiency (E) cores, and the P cores support SMT (Simultaneous MultiThreading, also known as 「Hyper-Threading」), so each P core can run two threads simultaneously and the Linux kernel will treat each P core as two logical cores. As a result, there are 32 logical cores in total. To utilize all these logical cores when running make, we can set an environment variable to tell make to run 32 jobs simultaneously:
export MAKEFLAGS='-j32'
or just building with:
make -j32
If you have applied the optional sed when building ninja in LFS, you can use:
export NINJAJOBS=32
when a package uses ninja, or just:
ninja -j32
If you are not sure about the number of logical cores, run the nproc command.
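For example, a common idiom (not specific to any package) is to let nproc supply the job count directly:

make -j$(nproc)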
For make, the default number of jobs is 1. But for ninja, the default number of jobs is N + 2 if the number of logical cores N is greater than 2; or N + 1 if N is 1 or 2. The reason to use a number of jobs slightly greater than the number of logical cores is keeping all logical processors busy even if some jobs are performing I/O operations.
Note that the -j switch only limits the parallel jobs started by make or ninja, but each job may still
spawn its own processes or threads. For example, ld.gold will use multiple threads
for linking, and some tests of packages can spawn multiple threads
for testing thread safety properties. There is no generic way for
the building system to know the number of processes or threads
spawned by a job. So generally we should not consider the value
passed with -j
a hard limit of the
number of logical cores to use. Read 「Use Linux
Control Group to Limit the Resource Usage」 if you want to set
such a hard limit.
Generally the number of processes should not exceed the number of cores supported by the CPU too much. To list the processors on your system, issue grep processor /proc/cpuinfo.
In some cases, using multiple processes may result in a race condition where the success of the build depends on the order of the commands run by the make program. For instance, if an executable needs File A and File B, attempting to link the program before one of the dependent components is available will result in a failure. This condition usually arises because the upstream developer has not properly designated all the prerequisites needed to accomplish a step in the Makefile.
If this occurs, the best way to proceed is to drop back to a single
processor build. Adding -j1
to a make
command will override the similar setting in the MAKEFLAGS
environment variable.
Another problem may occur with modern CPUs, which have a lot of cores. Each job started consumes memory, and if the sum of the memory needed by each job exceeds the available memory, you may encounter either an OOM (Out of Memory) kernel interrupt or intense swapping that will slow the build beyond reasonable limits.
Some compilations with g++ may consume up to 2.5 GB of memory, so to be safe, you should restrict the number of jobs to (Total Memory in GB)/2.5, at least for big packages such as LLVM, WebKitGtk, QtWebEngine, or libreoffice.
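As a rough sketch of that rule of thumb (the variable names are invented for illustration, and 2.5 GB per job is only the estimate given above):

# Use the smaller of the logical core count and (total RAM in GB)/2.5 as the job count.
TOTAL_GB=$(awk '/MemTotal/ { printf "%d", $2 / 1048576 }' /proc/meminfo)
JOBS_BY_MEM=$(( TOTAL_GB * 10 / 25 ))
NPROC=$(nproc)
JOBS=$(( JOBS_BY_MEM < NPROC ? JOBS_BY_MEM : NPROC ))
make -j$(( JOBS > 0 ? JOBS : 1 ))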
Sometimes we want to limit the resource usage when we build a package. For example, when we have 8 logical cores, we may want to use only 6 cores for building the package and reserve another 2 cores for playing a movie. The Linux kernel provides a feature called control groups (cgroup) for such a need.
Enable control group in the kernel configuration, then rebuild the kernel and reboot if necessary:
General setup --->
  [*] Control Group support --->                                  [CGROUPS]
        [*] Memory controller                                       [MEMCG]
        [*] Cpuset controller                                      [CPUSETS]
Ensure Systemd-255 and Shadow-4.14.2
have been rebuilt with Linux-PAM-1.5.3 support (if you are
interacting via a SSH or graphical session, also ensure the
OpenSSH-9.5p1 server or the desktop manager has
been built with Linux-PAM-1.5.3). As the root
user, create a configuration file to allow
resource control without root
privilege, and instruct systemd to reload the
configuration:
mkdir -pv /etc/systemd/system/user@.service.d &&
cat > /etc/systemd/system/user@.service.d/delegate.conf << EOF &&
[Service]
Delegate=memory cpuset
EOF
systemctl daemon-reload
Then logout and login again. Now to run make -j5 with the first 4 logical cores and 8 GB of system memory, issue:
systemctl --user start dbus &&
systemd-run --user --pty --pipe --wait -G -d \
            -p MemoryHigh=8G \
            -p AllowedCPUs=0-3 \
            make -j5
With MemoryHigh=8G, a soft limit on memory usage is set. If the processes in the cgroup (make and all of its descendants) use more than 8 GB of system memory in total, the kernel will throttle the processes and try to reclaim system memory from them, but they can still use more than 8 GB of system memory. If you want a hard limit instead, replace MemoryHigh with MemoryMax. Doing so will, however, cause the processes to be killed if 8 GB is not enough for them.
AllowedCPUs=0-3
makes the
kernel only run the processes in the cgroup on the logical cores
with numbers 0, 1, 2, or 3. You may need to adjust this setting
based on the mapping between the logical cores and the physical cores.
For example, with an Intel Core i9-13900K CPU, the logical cores 0,
2, 4, ..., 14 are mapped to the first threads of the eight physical
P cores, the logical cores 1, 3, 5, ..., 15 are mapped to the
second threads of the physical P cores, and the logical cores 16,
17, ..., 31 are mapped to the 16 physical E cores. So if we want to
use four threads from four different P cores, we need to specify
0,2,4,6
instead of 0-3. Note that other CPU models may use a different mapping scheme. If you are not sure about the mapping between the logical cores and the physical cores, run grep -E '^processor|^core' /proc/cpuinfo, which will output logical core IDs in the processor lines and physical core IDs in the core id lines.
When the nproc or
ninja command runs in
a cgroup, it will use the number of logical cores assigned to the
cgroup as the 「system
logical core count」. For example, in a cgroup with
logical cores 0-3 assigned, nproc will print 4
, and ninja will run 6 (4 + 2) jobs
simultaneously if no -j
setting is
explicitly given.
Read the man pages systemd-run(1)
and
systemd.resource-control(5)
for the
detailed explanation of parameters in the command.
There are times when automating the building of a package can come
in handy. Everyone has their own reasons for wanting to automate
building, and everyone goes about it in their own way. Creating
Makefiles, Bash scripts, Perl scripts, or simply a list of commands used
to cut and paste are just some of the methods you can use to
automate building BLFS packages. Detailing how and providing
examples of the many ways you can automate the building of packages
is beyond the scope of this section. This section will expose you
to using file redirection and the yes command to help provide ideas
on how to automate your builds.
You will find times throughout your BLFS journey when you will come across a package that has a command prompting you for information. This information might be configuration details, a directory path, or a response to a license agreement. This can present a challenge to automate the building of that package. Occasionally, you will be prompted for different information in a series of questions. One method to automate this type of scenario requires putting the desired responses in a file and using redirection so that the program uses the data in the file as the answers to the questions.
This effectively makes the program use the responses in the file as the input to the questions. Occasionally you may end up doing a bit of trial and error to determine the exact format of the input file, but once it is figured out and documented you can use it to automate building the package.
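As a sketch only (the script name, prompts, and answers below are invented; real packages will differ), the responses file and the redirection might look like this:

# One answer per line: an installation path, a 'y', and a bare Enter (the empty line).
cat > ../responses << "EOF"
/opt/foo
y

EOF
./some-interactive-script < ../responses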
Sometimes you need to give the same answer to a prompt, or the same answer to each prompt in a whole series. In such cases the yes command comes in handy. The yes command supplies the same answer over and over again; that answer can be just the Enter key, the character y, or an arbitrary string of text. A trivial usage example is shown below.
First, create a short Bash script by issuing the following command:
cat > blfs-yes-test1 << "EOF"
#!/bin/bash
echo -n -e "\n\nPlease type something (or nothing) and press Enter ---> "
read A_STRING
if test "$A_STRING" = ""; then A_STRING="Just the Enter key was pressed"
else A_STRING="You entered '$A_STRING'"
fi
echo -e "\n\n$A_STRING\n\n"
EOF
chmod 755 blfs-yes-test1
Now run the script ./blfs-yes-test1 from the command line. It waits for your response, which can be anything (or nothing) followed by the Enter key, and the text you entered is echoed to the screen. Next, use the yes command to automate the response:
yes | ./blfs-yes-test1
Notice that piping yes to the script results in y being passed to the script. Now try it with a string of text:
yes 'This is some text' | ./blfs-yes-test1
The exact string was used as the response to the script. Finally, try it using an empty (null) string:
yes '' | ./blfs-yes-test1
Notice this results in passing just the press of the Enter key to the script. This is useful for times when the default answer to the prompt is sufficient. This syntax is used in the Net-tools instructions to accept all the defaults to the many prompts during the configuration step. You may now remove the test script, if desired.
Automating the building of some packages, especially those that require you to read a license agreement one page at a time, requires using a method that avoids having to press a key to display each page. Redirecting the output to a file can be used in these instances to assist with the automation. The previous section on this page touched on creating log files of the build output. The redirection method shown there used the tee command to redirect output to a file while also displaying the output to the screen. Here, the output will only be sent to a file.
Again, the easiest way to demonstrate the technique is to show an example. First, issue the command:
ls -l /usr/bin | less
Of course, you'll be required to view the output one page at a time
because the less
filter was used. Now try the same command, but this time redirect
the output to a file. The special file /dev/null
can be used instead of the filename
shown, but you will have no log file to examine:
ls -l /usr/bin | less > redirect_test.log 2>&1
Notice that this time the command immediately returned to the shell prompt without having to page through the output. You may now remove the log file.
The last example will use the yes command in combination with output redirection to bypass having to page through the output and then provide a y to a prompt. This technique could be used in instances when otherwise you would have to page through the output of a file (such as a license agreement) and then answer the question of 「do you accept the above?」. For this example, another short Bash script is required:
cat > blfs-yes-test2 << "EOF"
#!/bin/bash
ls -l /usr/bin | less
echo -n -e "\n\nDid you enjoy reading this? (y,n) "
read A_STRING
if test "$A_STRING" = "y"; then A_STRING="You entered the 'y' key"
else A_STRING="You did NOT enter the 'y' key"
fi
echo -e "\n\n$A_STRING\n\n"
EOF
chmod 755 blfs-yes-test2
This script can be used to simulate a program that requires you to read a license agreement, then respond appropriately to accept the agreement before the program will install anything. First, run the script without any automation techniques by issuing ./blfs-yes-test2.
Now issue the following command which uses two automation techniques, making it suitable for use in an automated build script:
yes | ./blfs-yes-test2 > blfs-yes-test2.log 2>&1
If desired, issue tail blfs-yes-test2.log to see the end of the paged output, and confirmation that y was passed through to the script. Once satisfied that it works as it should, you may remove the script and log file.
Finally, keep in mind that there are many ways to automate and/or script the build commands. There is not a single 「correct」 way to do it. Your imagination is the only limit.
For each package described in this book, the packages it depends on are listed. These dependencies are split into the categories described below.
Required means that the target package cannot be built correctly unless the dependency has been installed first, except if the dependency is said to be 「runtime」, which means the target package can be built but cannot function without it.
Note that a target package can start to 「function」 in many subtle ways: an installed configuration file can make the init system, cron daemon, or bus daemon run a program automatically; another package that uses the target package as a dependency can run a program from the target package while it is being built; and the configuration sections in the BLFS book may also run a program from a just-installed package. So if you are installing the target package without a Required (runtime) dependency installed, you should install the dependency as soon as possible after the installation of the target package.
Recommended means that BLFS strongly suggests this package is installed first (except if said to be 「runtime」, see below) for a clean and trouble-free build, that won't have issues either during the build process, or at run-time. The instructions in the book assume these packages are installed. Some changes or workarounds may be required if these packages are not installed. If a recommended dependency is said to be 「runtime」, it means that BLFS strongly suggests that this dependency is installed before using the package, for getting full functionality.
Optional means that this package might be installed for added functionality. Often BLFS will describe the dependency to explain the added functionality that will result. An optional dependency may be picked up automatically by the target package if it is installed, but some optional dependencies may also need additional configuration options to be enabled when the target package is built. Such additional options are often documented in the BLFS book. If an optional dependency is said to be 「runtime」, it means you may install the dependency after installing the target package to support some optional features of the target package, if you need those features.
An optional dependency may be outside of BLFS. If you need such an external optional dependency for some features, read Going Beyond BLFS for general hints about installing an out-of-BLFS package.
When you try to build a package from this book, it may occasionally fail to build or fail to work properly. The book's editors always check that each package builds and works correctly, but a package may have been overlooked, or may not have been fully tested with this particular version of BLFS.
If you discover that a package will not build or does not work properly, you should check whether a more recent version of the package is available. Typically this means going to the maintainer's web site, downloading the most current tarball, and attempting to build the package with that version. If you cannot determine the maintainer's web site from the download URLs alone, use Google and search for the package's name; for example, type 'package_name download' (omitting the quotes) into the search bar, or perhaps 'package_name home page'. This should lead you to the package maintainer's site.
In LFS, stripping of debugging symbols and unneeded symbol table entries was discussed a couple of times. When building BLFS packages, there are generally no special instructions that discuss stripping again. Stripping can be done while installing a package, or afterwards.
There are several ways to strip executables installed by a package. They depend on the build system used (see below the section about build systems), so only some generalities can be listed here:
The following methods, using a feature of the build system (autotools, meson, or cmake), will not strip static libraries if any are installed. Fortunately there are not too many static libraries in BLFS, and a static library can always be stripped safely by running strip --strip-unneeded on it manually.
The packages using autotools usually have an install-strip
target in their
generated Makefile
files. So
installing stripped executables is just a matter of using
make
install-strip instead of make install.
The packages using the meson build system can accept
-Dstrip=true
when
running meson.
If you forgot to add this option when running meson, you can also run
meson install
--strip instead of ninja install.
cmake generates
install/strip
targets
for both the Unix
Makefiles
and Ninja
generators (the default
is Unix Makefiles
on
linux). So just run make
install/strip or ninja install/strip instead
of the install
counterparts.
Removing (or not generating) debug symbols can also be
achieved by removing the -g<something>
options in
C/C++ calls. How to do that is very specific for each
package. And, it does not remove unneeded symbol table
entries. So it will not be explained in detail here. See also
below the paragraphs about optimization.
The strip utility
changes files in place, which may break anything using it if it is
loaded in memory. Note that if a file is in use but just removed
from the disk (i.e. not overwritten nor modified), this is not a
problem since the kernel can use 「deleted」 files. Look at /proc/*/maps
and it is likely that you'll see
some (deleted) entries. The
mv just removes the
destination file from the directory but does not touch its content,
so that it satisfies the condition for the kernel to use the old
(deleted) file. But this approach can detach hard links into
duplicated copies, causing a bloat which is obviously unwanted as
we are stripping to reduce system size. If two files in a same file
system share the same inode number, they are hard links to each
other and we should reconstruct the link. The script below is just
an example. It should be run as the root
user:
cat > /usr/sbin/strip-all.sh << "EOF"
#!/usr/bin/bash
if [ $EUID -ne 0 ]; then
echo "Need to be root"
exit 1
fi
last_fs_inode=
last_file=
{ find /usr/lib -type f -name '*.so*' ! -name '*dbg'
find /usr/lib -type f -name '*.a'
find /usr/{bin,sbin,libexec} -type f
} | xargs stat -c '%m %i %n' | sort | while read fs inode file; do
if ! readelf -h $file >/dev/null 2>&1; then continue; fi
if file $file | grep --quiet --invert-match 'not stripped'; then continue; fi
if [ "$fs $inode" = "$last_fs_inode" ]; then
ln -f $last_file $file;
continue;
fi
cp --preserve $file ${file}.tmp
strip --strip-unneeded ${file}.tmp
mv ${file}.tmp $file
last_fs_inode="$fs $inode"
last_file=$file
done
EOF
chmod 744 /usr/sbin/strip-all.sh
If you install programs in other directories such as /opt
or /usr/local
,
you may want to strip the files there too. Just add other
directories to scan in the compound list of find commands between the braces.
For more information on stripping, see https://www.technovelty.org/linux/stripping-shared-libraries.html.
There are now three different build systems in common use for
converting C or C++ source code into compiled programs or libraries
and their details (particularly, finding out about available
options and their default values) differ. It may be easiest to
understand the issues caused by some choices (typically slow
execution or unexpected use of, or omission of, optimizations) by
starting with the CFLAGS
, CXXFLAGS
, and LDFLAGS
environment variables. There are also some programs which use Rust.
Most LFS and BLFS builders are probably aware of the basics of
CFLAGS
and CXXFLAGS
for altering how a program is compiled.
Typically, some form of optimization is used by upstream developers
(-O2
or -O3
), sometimes with the creation of debug symbols
(-g
), as defaults.
If there are contradictory flags (e.g. multiple different
-O
values), the last value will be used. Sometimes this
means that flags specified in environment variables will be picked
up before values hardcoded in the Makefile, and therefore ignored.
For example, where a user specifies -O2
and that is followed by -O3
the build
will use -O3
.
There are various other things which can be passed in CFLAGS or
CXXFLAGS, such as allowing using the instruction set extensions
available with a specific microarchitecture (e.g. -march=amdfam10
or -march=native
), tune the generated code for a
specific microarchitecture (e. g. -mtune=tigerlake or -mtune=native; if -mtune= is not used, the microarchitecture from the -march= setting will be used), or
specifying a specific standard for C or C++ (-std=c++17
for example). But one thing which has
now come to light is that programmers might include debug
assertions in their code, expecting them to be disabled in releases
by using -DNDEBUG
. Specifically, if
Mesa-23.3.1 is built with these assertions
enabled, some activities such as loading levels of games can take
extremely long times, even on high-class video cards.
This combination is often described as 「CMMI」 (configure, make, make install) and is used here to also cover the few packages which have a configure script that is not generated by autotools.
Sometimes running ./configure --help will produce useful options about switches which might be used. At other times, after looking at the output from configure you may need to look at the details of the script to find out what it was actually searching for.
Many configure scripts will pick up any CFLAGS or CXXFLAGS from the environment, but CMMI packages vary about how these will be mixed with any flags which would otherwise be used (variously: ignored, used to replace the programmer's suggestion, used before the programmer's suggestion, or used after the programmer's suggestion).
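One common way to supply such flags to a CMMI package is on the configure command line (the flags and prefix below are only examples); whether they replace or extend the package's own defaults still depends on the package:

CFLAGS="-O2 -march=native"   \
CXXFLAGS="-O2 -march=native" \
./configure --prefix=/usr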
In most CMMI packages, running make will list each command and
run it, interspersed with any warnings. But some packages try to be
「silent」 and
only show which file they are compiling or linking instead of
showing the command line. If you need to inspect the command,
either because of an error, or just to see what options and flags
are being used, adding V=1
to the make
invocation may help.
CMake works in a very different way, and it has two backends which
can be used on BLFS: make and ninja. The default backend is
make, but ninja can be faster on large packages with multiple
processors. To use ninja, specify -G
Ninja
in the cmake command. However, there are some packages
which create fatal errors in their ninja files but build
successfully using the default of Unix Makefiles.
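A typical out-of-tree configuration selecting the Ninja backend might look like the following (the options shown are common BLFS-style choices, not requirements of any particular package):

mkdir build &&
cd    build &&
cmake -D CMAKE_INSTALL_PREFIX=/usr -D CMAKE_BUILD_TYPE=Release -G Ninja .. &&
ninja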
The hardest part of using CMake is knowing what options you might wish to specify. The only way to get a list of what the package knows about is to run cmake -LAH and look at the output for that default configuration.
Perhaps the most-important thing about CMake is that it has a
variety of CMAKE_BUILD_TYPE values, and these affect the flags. The
default is that this is not set and no flags are generated. Any
CFLAGS
or CXXFLAGS
in the environment will be used. If the
programmer has coded any debug assertions, those will be enabled
unless -DNDEBUG is used. The following CMAKE_BUILD_TYPE values will
generate the flags shown, and these will come after any flags in the environment and
therefore take precedence.
Value          | Flags
---------------|-----------------
Debug          | -g
Release        | -O3 -DNDEBUG
RelWithDebInfo | -O2 -g -DNDEBUG
MinSizeRel     | -Os -DNDEBUG
CMake tries to produce quiet builds. To see the details of the commands which are being run, use make VERBOSE=1 or ninja -v.
By default, CMake treats file installation differently from the
other build systems: if a file already exists and is not newer than
a file that would overwrite it, then the file is not installed.
This may be a problem if a user wants to record which file belongs
to a package, either using LD_PRELOAD
,
or by listing files newer than a timestamp. The default can be
changed by setting the variable CMAKE_INSTALL_ALWAYS
to 1 in the environment, for example by export'ing it.
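For example, before installing a CMake-based package:

export CMAKE_INSTALL_ALWAYS=1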
Meson has some similarities to CMake, but many differences. To get
details of the defines that you may wish to change you can look at
meson_options.txt
which is usually in
the top-level directory.
If you have already configured the package by running meson and now wish to change one or more settings, you can either remove the build directory, recreate it, and use the altered options, or within the build directory run meson configure, e.g. to set an option:
meson configure -D<some_option>=true
If you do that, the file meson-private/cmd_line.txt
will show the
last commands which were
used.
Meson provides the following buildtype values, and the flags they enable come after any flags supplied in the environment and therefore take precedence.
plain : no added flags. This is for distributors to supply
their own CFLAGS
, CXXFLAGS
and LDFLAGS
. There is no obvious reason to use
this in BLFS.
debug : -g
- this is the default
if nothing is specified in either meson.build
or the command line. However it results in large and slow binaries, so we should override it in BLFS.
debugoptimized : -O2 -g
- this is
the default specified in meson.build
of some packages.
release : -O3
(occasionally a
package will force -O2
here) -
this is the buildtype we use for most packages with Meson
build system in BLFS.
The -DNDEBUG
flag is implied by the
release buildtype for some packages (for example Mesa-23.3.1). It can
also be provided explicitly by passing -Db_ndebug=true
.
To see the details of the commands which are being run in a package using meson, use ninja -v.
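Putting this together, a typical BLFS-style Meson build using the release buildtype might look like this (the options are illustrative, not specific to any package):

mkdir build &&
cd    build &&
meson setup --prefix=/usr --buildtype=release .. &&
ninja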
Most released rustc programs are provided as crates (source
tarballs) which will query a server to check current versions of
dependencies and then download them as necessary. These packages
are built using cargo build --release. In theory, you can manipulate the
RUSTFLAGS to change the optimize-level (default for --release
is 3, i. e. -Copt-level=3
, like -O3
) or to force it to build for the machine it is
being compiled on, using -Ctarget-cpu=native
but in practice this seems to
make no significant difference.
If you are compiling a standalone Rust program (as an unpackaged
.rs
file) by running rustc directly, you should
specify -O
(the abbreviation of
-Copt-level=2
) or -Copt-level=3
otherwise it will do an unoptimized
compile and run much slower.
If you are compiling the program for debugging, replace the
-O
or -Copt-level=
options with -g
to produce an unoptimized program with debug
info.
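For instance, for a standalone file (the file name hello.rs is just a placeholder):

rustc -O hello.rs     # optimized build
rustc -g hello.rs     # unoptimized build with debug info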
Like ninja, by
default cargo uses
all logical cores. This can often be worked around, either by
exporting CARGO_BUILD_JOBS=<N> or passing
--jobs <N> to cargo. For compiling rustc
itself, specifying --jobs <N> for invocations of
x.py (together with the CARGO_BUILD_JOBS=<N> environment
variable, which looks like a 「belt and braces」 approach but seems to be
necessary) mostly works. The exception is running the tests when
building rustc, some of them will nevertheless use all online CPUs,
at least as of rustc-1.42.0.
Many people will prefer to optimize compiles as they see fit, by
providing CFLAGS
or CXXFLAGS
. For an introduction to the options
available with gcc and g++ see
https://gcc.gnu.org/onlinedocs/gcc-13.2.0/gcc/Optimize-Options.html.
The same content can be also found in info gcc.
Some packages default to -O2 -g
, others
to -O3 -g
, and if CFLAGS
or CXXFLAGS
are
supplied they might be added to the package's defaults, replace the
package's defaults, or even be ignored. There are details on some
desktop packages which were mostly current in April 2019 at
https://www.linuxfromscratch.org/~ken/tuning/
- in particular, README.txt
,
tuning-1-packages-and-notes.txt
, and
tuning-notes-2B.txt
. The particular
thing to remember is that if you want to try some of the more
interesting flags you may need to force verbose builds to confirm
what is being used.
Clearly, if you are optimizing your own program you can spend time
to profile it and perhaps recode some of it if it is too slow. But
for building a whole system that approach is impractical. In
general, -O3
usually produces faster
programs than -O2
. Specifying
-march=native
is also beneficial, but
means that you cannot move the binaries to an incompatible machine
- this can also apply to newer machines, not just to older
machines. For example programs compiled for amdfam10
run on old Phenoms, Kaveris, and Ryzens:
but programs compiled for a Kaveri will not run on a Ryzen because
certain op-codes are not present. Similarly, if you build for a
Haswell not everything will run on a SandyBridge.
Be careful that the name of a -march
setting does not always match the baseline of the
microarchitecture with the same name. For example, the
Skylake-based Intel Celeron processors do not support AVX at all,
but -march=skylake
assumes AVX and
even AVX2.
When a shared library is built by GCC, a feature named 「semantic interposition」
is enabled by default. When the shared library refers to a symbol
name with external linkage and default visibility, if the symbol
exists in both the shared library and the main executable, semantic
interposition guarantees the symbol in the main executable is
always used. This feature was invented in an attempt to make the
behavior of linking a shared library and linking a static library
as similar as possible. Today only a small number of packages still
depend on semantic interposition, but the feature is still on by
default in GCC, causing many optimizations to be disabled for shared
libraries because they conflict with semantic interposition. The
-fno-semantic-interposition
option can
be passed to gcc or
g++ to disable
semantic interposition and enable more optimizations for shared
libraries. This option is used as the default of some packages (for
example Python-3.12.1), and it's also the default of
Clang.
There are also various other options which some people claim are beneficial. At worst, you get to recompile and test, and then discover that in your usage the options do not provide a benefit.
If building Perl or Python modules, in general the CFLAGS
and CXXFLAGS
used
are those which were used by those 「parent」 packages.
For LDFLAGS, there are three options that can be used for optimization. They are quite safe to use, and the build systems of some packages use some of these options as the default.
With -Wl,-O1
, the linker will optimize
the hash table to speed up the dynamic linking. Note that
-Wl,-O1
is completely unrelated to the
compiler optimization flag -O1
.
With -Wl,--as-needed, the linker will disregard unnecessary -lfoo options from the command line, i. e. the shared library libfoo will only be linked if a symbol in libfoo is really referred from the executable or shared library being linked. This can sometimes mitigate the 「excessive dependencies to shared libraries」 issues caused by libtool.
With -Wl,-z,pack-relative-relocs
, the
linker generates a more compacted form of the relative relocation
entries for PIEs and shared libraries. It reduces the size of the
linked PIE or shared library, and speeds up the loading of the PIE
or shared library.
The -Wl, prefix is necessary because, even though the variable is named LDFLAGS, its content is actually passed to gcc (or g++, clang, etc.) during the link stage, not passed directly to ld.
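So, to use all three options described above, one might set (only an illustration; how each package treats LDFLAGS from the environment still varies):

export LDFLAGS="-Wl,-O1 -Wl,--as-needed -Wl,-z,pack-relative-relocs"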
Even on desktop systems, there are still a lot of exploitable vulnerabilities. For many of these, the attack comes via javascript in a browser. Often, a series of vulnerabilities are used to gain access to data (or sometimes to pwn, i.e. own, the machine and install rootkits). Most commercial distros will apply various hardening measures.
In the past, there was Hardened LFS where gcc (a much older
version) was forced to use hardening (with options to turn some of
it off on a per-package basis). The current LFS and BLFS books are
carrying forward a part of its spirit by enabling PIE (-fPIE -pie
) and SSP (-fstack-protector-strong
) as the defaults for GCC
and clang. What is being covered here is different - first you have
to make sure that the package is indeed using your added flags and
not over-riding them.
For hardening options which are reasonably cheap, there is some
discussion in the 'tuning' link above (occasionally, one or more of
these options might be inappropriate for a package). These options
are -D_FORTIFY_SOURCE=2
and (for C++)
-D_GLIBCXX_ASSERTIONS
. On modern
machines these should only have a little impact on how fast things
run, and often they will not be noticeable.
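For example, to add those two options to your own flags (a sketch only; the -O2 base is an assumption, and a few packages may override or object to these):

export CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
export CXXFLAGS="-O2 -D_FORTIFY_SOURCE=2 -D_GLIBCXX_ASSERTIONS"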
The main distros use much more, such as RELRO (Relocation Read
Only) and perhaps -fstack-clash-protection
. You may also encounter
the so-called 「userspace
retpoline」 (-mindirect-branch=thunk
etc.) which is the
equivalent of the spectre mitigations applied to the linux kernel
in late 2018. The kernel mitigations caused a lot of complaints
about lost performance, if you have a production server you might
wish to consider testing that, along with the other available
options, to see if performance is still sufficient.
Whilst gcc has many hardening options, clang/LLVM's strengths lie elsewhere. Some options which gcc provides are said to be less effective in clang/LLVM.