
4.3.4 Compiling a custom kernel

Why Compile a Custom Kernel

Compiling a custom kernel gives you control over what runs in your system at the deepest level. Instead of relying on a generic distribution kernel that supports a huge variety of hardware and features, you can build a kernel that is tailored to your machine and your needs. This can reduce boot time, improve performance in certain workloads, add support for very new hardware, or enable experimental or specialized features that your distribution does not ship yet.

A custom kernel is also a powerful learning tool. The process of configuring, building, and installing it exposes you to the structure of the kernel source tree, the role of configuration options, and the relationship between the kernel, its modules, and user space. Because kernel compilation affects core system stability, it is important to work methodically, keep backups, and always have a way to boot back into a known good kernel.

A custom kernel can render your system unbootable if misconfigured. Always keep at least one working distribution kernel installed and an accessible rescue method, for example a live USB or a virtual machine snapshot.

Preparing the Build Environment

Before you can build a kernel, you need the appropriate tools and enough disk space. Kernel sources are large, and compilation generates many intermediate files. You should plan for several gigabytes of free space in the filesystem where you will place the source tree, typically under /usr/src or in a directory in your home folder.
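A quick pre-flight check of free space can save a failed build later. The sketch below reports the space available in the current directory; the 30 GiB threshold is an assumption for a comfortable margin, not an official requirement:

```shell
# Rough pre-flight check of free space in the current directory.
# The 30 GiB figure is an assumption; builds with debug info can need more.
need_gib=30
avail_kib=$(df --output=avail -k . | tail -n 1)
avail_gib=$((avail_kib / 1024 / 1024))
echo "free space here: ${avail_gib} GiB"
if [ "$avail_gib" -lt "$need_gib" ]; then
    echo "warning: less than ${need_gib} GiB free; the build may run out of space"
fi
```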

The kernel build process uses a standard set of development tools: a C compiler such as gcc or clang, make, binutils for linking and assembling, and a few additional utilities for configuration and compression. Your distribution usually provides these as development or build-essential packages. On a Debian or Ubuntu based system you would install them with a command such as:

sudo apt install build-essential libncurses-dev bison flex libssl-dev libelf-dev

On an RPM based system like Fedora you might use:

sudo dnf groupinstall "Development Tools"
sudo dnf install ncurses-devel bison flex elfutils-libelf-devel openssl-devel

The exact package names differ by distribution, but the pattern is the same: you install the compiler toolchain, the curses library for text user interfaces, and libraries for cryptography and ELF handling. If you plan to build kernels with additional compression formats, such as LZMA or XZ, the build system will also need the corresponding development libraries.
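Before starting a build, it can help to confirm that the core tools are actually on your PATH. A small sketch; the tool list below is an illustrative subset of what the kernel build uses:

```shell
# Report which of the common kernel build tools are on the PATH.
for tool in gcc make ld bison flex bc; do
    if command -v "$tool" >/dev/null 2>&1; then
        printf '%-6s found: %s\n' "$tool" "$(command -v "$tool")"
    else
        printf '%-6s MISSING\n' "$tool"
    fi
done
```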

You should perform kernel compilation as a regular user, not as root. Only the installation steps that copy files into /boot and update the bootloader require elevated privileges. This separation reduces the risk that a mistake in the build directory damages the system.

Obtaining the Kernel Source

You can obtain kernel source from your distribution or directly from the mainline Linux kernel project. Distribution kernels often include patches and integration logic that match your system tools, while the mainline kernel from kernel.org represents the official upstream code.

To get a mainline kernel, visit the official kernel site and download a tar.xz or tar.gz archive for the version you want. Suppose you download linux-6.9.tar.xz into your home directory. You would extract it with:

tar xf linux-6.9.tar.xz
cd linux-6.9

This creates a directory that contains the kernel source tree. Inside it you will find files such as Makefile, Kconfig files, and many subdirectories like arch, drivers, and fs. You will run configuration and build commands from this top level directory.
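It is also worth verifying the archive you downloaded. kernel.org publishes a sha256sums.asc file in the same download directory as each release. A hedged sketch, assuming the archive and checksum file sit in the current directory:

```shell
# Verify the downloaded archive if the published checksum file is present.
if [ -f sha256sums.asc ] && [ -f linux-6.9.tar.xz ]; then
    sha256sum -c --ignore-missing sha256sums.asc
else
    echo "checksum file or archive not present; skipping verification"
fi
```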

If you prefer to follow your distribution’s kernel, your package manager can usually install it. For example, on Debian based systems there are packages like linux-source that place the source in /usr/src. These sources often match the configuration and patches that your current kernel uses. Using a distribution kernel source can simplify the first custom build, because you can start from a configuration that is already known to work on your machine.

Working with Kernel Configuration

The kernel configuration decides which features, subsystems, drivers, and debugging options are included. Each option typically can be compiled in, built as a loadable module, or disabled. The configuration is encoded in a file named .config in the root of the source tree.

A common approach is to start from the configuration of your currently running kernel. Distributions store this configuration in /boot, usually named something like config-$(uname -r). You can copy it to the source directory with:

cp /boot/config-$(uname -r) .config

This gives you a known good baseline. Newer kernels introduce new options that your old configuration does not specify, so before building you should update it. The kernel build system provides helpers to do that.
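If /boot holds no configuration file for the running kernel, some kernels expose their own configuration at /proc/config.gz, though only when built with CONFIG_IKCONFIG_PROC. A fallback sketch, run from the top of the source tree:

```shell
# Fall back to the running kernel's embedded configuration, if available.
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz > .config
    echo "copied running kernel configuration to .config"
else
    echo "/proc/config.gz not available; use a /boot config instead"
fi
```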

The simplest maintenance tool is oldconfig. In the source directory you run:

make oldconfig

This reads your existing .config, keeps all known settings, and then prompts you for each new option one by one. A non-interactive variant, make olddefconfig, instead accepts the default value for every new option without asking. For many users the oldconfig prompts are more detail than they want for a first build, so other interfaces can be easier.

The menuconfig interface provides a text based menu to browse and modify options. It requires curses development libraries. You invoke it with:

make menuconfig

A hierarchical menu appears in your terminal. Categories correspond to parts of the kernel, for example processor type, device drivers, file systems, and networking. Inside each menu you can enable or disable features. Options are usually marked as built in, module, or off, with notations like [*], <M>, or [ ]. The built-in help screen explains each option, its dependencies, and possible effects.

Graphical interfaces exist as well, such as xconfig and gconfig, which require the relevant GUI toolkit libraries. The underlying configuration system is the same in all cases, only the way you edit .config changes.

Never remove support for the root filesystem type of your current system, for example EXT4 or XFS, from the kernel that must mount /. Also do not disable the basic storage controller driver required to access the disk that holds your root filesystem. If these are not built in, the kernel will fail to mount the root filesystem and the system will not boot.

At this stage you can also customize the Local version string in the General setup section. This is appended to the kernel version and helps distinguish your custom kernel from distribution kernels. For instance, you might set it to -custom1, which will yield a kernel release like 6.9.0-custom1.

When you save and exit menuconfig or any other configuration tool, it writes the .config file. This configuration fully determines what the build system will compile.

Building the Kernel and Modules

Once your configuration is ready, you can compile the kernel image and its modules. Before a first build in a new source tree, it is often recommended to run:

make clean

or for a more thorough cleanup:

make mrproper

mrproper removes everything that clean does, plus generated configuration files including any existing .config, so you should only run it before establishing your configuration or after you have backed it up.

To compile, you usually just invoke:

make

By default, make runs build jobs sequentially. On multi core systems you can speed up compilation considerably by specifying a -j value, for example:

make -j$(nproc)

This uses all available CPU cores. On less powerful systems, kernel compilation can take significant time. During the build, the system compiles the main kernel image and each selected driver or feature, then links them into a final executable image in the arch subdirectory that matches your architecture, for example arch/x86/boot/bzImage for 64 bit x86.

Kernel modules are loadable object files, with the .ko extension, that the kernel can load at runtime as needed. Compilation produces these files in directories that mirror the driver layout, for example drivers/net for network drivers. After make finishes, you still need to install these modules into the proper location under /lib/modules.

To do that, run:

sudo make modules_install

This copies the built modules into a versioned directory such as /lib/modules/6.9.0-custom1. It also runs depmod to generate a module dependency file, modules.dep, which the system uses to resolve which modules depend on others.
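You can verify the result by inspecting the module tree of a kernel release. A sketch against the currently running kernel, which degrades gracefully if no tree is installed:

```shell
# Inspect the module tree that modules_install creates for a kernel release.
dir="/lib/modules/$(uname -r)"
if [ -f "$dir/modules.dep" ]; then
    echo "module dependency file: $dir/modules.dep"
    head -n 3 "$dir/modules.dep"
else
    echo "no installed module tree for $(uname -r)"
fi
```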

You can install the kernel image and associated files manually or use a helper target. Many distributions integrate with the install target:

sudo make install

This typically copies the kernel image, the System.map file, and the .config to /boot, naming them with the kernel version, then runs the appropriate bootloader update command such as grub-mkconfig or an equivalent wrapper used by your distribution.

If your distribution does not support this directly, or if you want finer control, you can manually copy arch/.../boot/bzImage to /boot/vmlinuz-<version> and then modify your bootloader configuration yourself. The precise steps vary between GRUB, systemd-boot, or other loaders, and are covered in the separate boot process chapter.

Using Distribution Tools and Packages

Many distributions provide helper tools that wrap kernel compilation in a packaging step. Instead of installing the compiled kernel directly, you build a package, for example a deb or rpm, that your package manager can track. This has the advantage that you can later remove the kernel cleanly with the package manager, and it keeps the installation layout consistent with distribution standards.

On Debian based systems, a popular helper is make deb-pkg. From the kernel source directory you can run:

make -j$(nproc) deb-pkg

This builds the kernel and produces several .deb files in the parent directory. These include the kernel image package, a headers package, and a libc development package. You then install the resulting kernel image package with the usual dpkg -i command. The packaging scripts take care of copying files to /boot and updating GRUB.

On RPM based systems, similar targets exist such as rpm-pkg:

make -j$(nproc) rpm-pkg

This creates an RPM package in the build tree or in configured RPM build directories. Installing it with dnf or rpm -i will register the new kernel with the package manager.

Using package based installation is especially attractive on servers or in environments that have configuration management. It lets you distribute the custom kernel in a controlled way and keeps the system in a state that automation tools can understand.

Installing and Updating the Bootloader

After you install the kernel files into /boot, either manually or via make install or a package, you must ensure that your bootloader knows about the new kernel. On systems that use GRUB2, the install step usually runs update-grub or an equivalent command automatically, which regenerates the GRUB configuration file and adds menu entries for the new version.

If your system does not update automatically, you may need to run the configuration command yourself. On Debian based systems this is often:

sudo update-grub

On other GRUB based distributions a commonly used command is:

sudo grub-mkconfig -o /boot/grub/grub.cfg

This scans /boot for installed kernels and generates entries accordingly. The new kernel will usually become the default entry, either as the newest version or according to your distribution’s policies.
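To confirm which kernels GRUB actually picked up, you can list the menu entries in the generated configuration. A sketch that tries the two common file locations, since the path varies between Debian style and Fedora style layouts:

```shell
# List the kernels GRUB currently knows about (config path varies by distro).
cfg=""
for c in /boot/grub/grub.cfg /boot/grub2/grub.cfg; do
    [ -r "$c" ] && cfg="$c"
done
if [ -n "$cfg" ]; then
    grep -E '^[[:space:]]*menuentry ' "$cfg"
else
    echo "no readable grub.cfg found"
fi
```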

If you use an alternative bootloader, such as systemd-boot, you update its configuration manually or via hooks that the distribution provides. Typically you create or edit a loader entry file that specifies the kernel image path, the initramfs path if used, and the kernel parameters. The details of that are part of the broader bootloader topic.

It is useful to keep at least one previous kernel installed and visible in the boot menu. If the new one fails to boot or exhibits regressions, you can select the older kernel from the bootloader menu at startup and return to a working environment.

Do not remove your distribution’s original kernel until you have thoroughly tested your custom kernel and have confirmed that at least one backup kernel entry is functional. Always maintain a fallback option in the bootloader.

Testing and Troubleshooting a New Kernel

After installation and bootloader configuration, you can reboot into your new kernel. During the reboot, select the appropriate entry from the boot menu, especially if your system still defaults to the distribution kernel. Once the system has started, you can verify which kernel is running with:

uname -r

The output should show the version string you configured, including any local suffix you added. This confirms that the system is using your custom build rather than a previous one.

Initially, check basic system functions. Confirm that your filesystems mount correctly, that your network interfaces appear, and that input and display devices behave as expected. If critical hardware is missing, it often indicates that the corresponding driver has been disabled or built as a module in a way that prevents it from loading at boot. You can inspect loaded modules with lsmod and search for available modules with find /lib/modules/$(uname -r). If a driver module exists but is not loaded, a simple modprobe command may bring it in. If the module is absent, you likely need to revisit the kernel configuration and enable the relevant option.
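The checks above can be combined into a small sketch that classifies a driver as loaded, merely available, or missing; e1000e, an Intel network driver, is only an example name:

```shell
# Check whether an example driver is loaded, available, or absent
# for the currently running kernel.
mod=e1000e
if lsmod 2>/dev/null | grep -q "^${mod}[[:space:]]"; then
    echo "${mod} is loaded"
elif find "/lib/modules/$(uname -r)" -name "${mod}.ko*" 2>/dev/null | grep -q .; then
    echo "${mod} is available; load it with: sudo modprobe ${mod}"
else
    echo "${mod} is not built for this kernel"
fi
```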

When a new kernel fails to boot entirely, the bootloader may still allow you to select an older kernel. Use that to return to a working system. You can then inspect the logs from the failed boot if the system wrote any, or take a photograph of the panic message on the console. Common boot failures originate from missing root filesystem support, incorrect storage controller drivers, or misconfigured kernel command line parameters that the bootloader passes.

Incremental changes make troubleshooting easier. Instead of changing many unrelated configuration options at once, adjust a small group of related settings, rebuild, and retest. This pattern helps you isolate which change introduced a problem. Between builds, you do not need to clean everything. You can generally rerun make -j$(nproc) with the new configuration and only the affected files will rebuild. However, if you encounter strange compilation errors or configuration mismatches, a clean build after make clean or make mrproper followed by reapplying your .config can resolve subtle issues.

If you keep multiple custom kernels, give each a distinct local version string and track their configuration files. Storing .config snapshots under different names and possibly committing them to a version control system such as Git can provide a record of what you changed and when. This habit turns kernel building into a reproducible process rather than a one time manual experiment.
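A minimal sketch of such a snapshot habit, run from the source tree after configuring; the ~/kernel-configs location and naming scheme are assumptions, not a convention:

```shell
# Archive the current .config under a dated, descriptive name.
snapdir="${HOME:-/tmp}/kernel-configs"
mkdir -p "$snapdir"
if [ -f .config ]; then
    cp .config "$snapdir/config-$(date +%Y%m%d)-custom1"
fi
ls "$snapdir"
```

Running git init, git add, and git commit inside that directory then records each snapshot with a history you can diff and annotate.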

Finally, read the kernel’s Documentation directory and the online documentation for more detailed descriptions of configuration options and subsystem behavior. Compiling a custom kernel is not only about producing a binary image but also about understanding the tradeoffs of enabling and disabling features. Over time, this practice gives deep insight into how Linux interacts with your hardware and your workloads.
