What CMake Is and Why It’s Popular in HPC
CMake is a cross-platform build system generator. It does not compile your code directly; instead, it generates native build files (like Makefiles or Ninja build files) tailored to your platform and toolchain.
For HPC, CMake is useful because:
- It works across different clusters (Linux distributions, compilers, MPI stacks).
- It simplifies building large, multi-language projects (C, C++, Fortran, CUDA).
- It makes it easier to manage compiler options, dependencies, and build types.
You typically interact with CMake in two stages:
- Configuration: Tell CMake what you want to build and how (CMake scripts).
- Generation / Build: CMake generates a build system (e.g. Makefiles), then you run that system to compile.
A typical workflow:
mkdir build
cd build
cmake ..
make -j
Basic CMake Usage in an HPC Context
Out-of-Source Builds
On shared systems, keeping build artifacts separate from source is common practice. CMake encourages this via out-of-source builds:
# In your project root directory
mkdir build
cd build
cmake .. # configure the project
cmake --build .  # or: make, ninja, etc.
Benefits:
- Source tree stays clean (no stray object files).
- Easy to have multiple builds of the same source (e.g. debug vs optimized):
build-debug/
build-release/
build-gpu/
A Minimal CMake Project
Suppose you have a simple C program:
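For concreteness, main.c might contain nothing more than a hello-world (the contents below are only an illustration):
/* main.c - a minimal program to exercise the build (hypothetical contents) */
#include <stdio.h>

int main(void) {
    printf("Hello from the cluster!\n");
    return 0;
}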
# CMakeLists.txt (in project root)
cmake_minimum_required(VERSION 3.18)
project(MyHPCApp LANGUAGES C)
add_executable(my_hpc_app main.c)
Then:
mkdir build
cd build
cmake ..
cmake --build .
This will detect your default C compiler on the cluster and build my_hpc_app.
Organizing a Project with CMake
Project Structure
A common layout for HPC codes:
MyHPCApp/
  CMakeLists.txt
  src/
    CMakeLists.txt
    main.c
    solver.c
    solver.h
  include/
    myhpcapp/
      config.h
Top-level CMakeLists.txt:
cmake_minimum_required(VERSION 3.18)
project(MyHPCApp LANGUAGES C)
add_subdirectory(src)
src/CMakeLists.txt:
add_executable(my_hpc_app
  main.c
  solver.c
)
target_include_directories(my_hpc_app
  PRIVATE
    ${CMAKE_SOURCE_DIR}/include
)
Key ideas:
- add_subdirectory lets you split configuration across folders.
- target_include_directories attaches include paths to specific targets, avoiding global state.
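With that include path attached, sources in src/ can refer to headers by their path under include/; a (hypothetical) line in main.c:
#include "myhpcapp/config.h"  /* found via the include/ path added above */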
Selecting Compilers on HPC Systems
On clusters, compilers are often provided via modules (covered elsewhere). You typically:
module load gcc/13.2.0 # example
mkdir build-gcc
cd build-gcc
cmake ..
CMake normally picks up the compilers from your environment (the CC, CXX, and FC variables, or whatever is first in PATH) at configuration time. To select compilers explicitly:
mkdir build
cd build
cmake .. \
  -DCMAKE_C_COMPILER=gcc \
  -DCMAKE_CXX_COMPILER=g++ \
  -DCMAKE_Fortran_COMPILER=gfortran
Do this only on the first configuration of a build directory; switching compilers in an already-configured build directory is not supported.
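Equivalently, because CMake reads the CC, CXX, and FC environment variables on the first configure, you can write:
CC=gcc CXX=g++ FC=gfortran cmake ..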
Build Types and Optimization Levels
CMake has the notion of a “build type” for single-config generators like Makefiles:
- Debug – no optimization, debug symbols
- Release – optimization, no debug info
- RelWithDebInfo – optimization + debug symbols
- MinSizeRel – optimized for size
Select a build type when configuring:
mkdir build-debug
cd build-debug
cmake .. -DCMAKE_BUILD_TYPE=Debug
cmake --build .
cd ..
mkdir build-release
cd build-release
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
You can inspect flags with:
cmake -LA .  # run in the build directory; -A also lists "advanced" cache variables, which include the flags
or by printing variables in CMakeLists.txt using message().
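For example, a few message() calls in CMakeLists.txt will echo the relevant variables at configure time:
# Print the flags CMake will use (useful when debugging build configurations)
message(STATUS "Build type:        ${CMAKE_BUILD_TYPE}")
message(STATUS "C flags (base):    ${CMAKE_C_FLAGS}")
message(STATUS "C flags (Release): ${CMAKE_C_FLAGS_RELEASE}")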
Adjusting Compiler Flags
For basic adjustments:
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -O3")
More modern (target-specific) approach:
target_compile_options(my_hpc_app PRIVATE -Wall -Wextra)
This lets you tune flags per target (e.g. different flags for a GPU version).
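Generator expressions take this further and let you scope flags to a build type as well; a sketch (the -march=native flag is only an example and is compiler-specific):
# Extra optimization flags applied only to Release builds of this target
target_compile_options(my_hpc_app PRIVATE
  "$<$<CONFIG:Release>:-O3;-march=native>"
)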
Linking Libraries and Dependencies
CMake manages libraries through targets and find_package.
Linking a System Library (e.g., `m` for math)
add_executable(my_hpc_app main.c solver.c)
target_link_libraries(my_hpc_app PRIVATE m)
Using `find_package` for External Libraries
Many HPC libraries ship with CMake config files or Find modules. Example: using MPI.
Basic pattern:
find_package(MPI REQUIRED)
add_executable(my_mpi_app main_mpi.c)
target_link_libraries(my_mpi_app PRIVATE MPI::MPI_C)
Then, in the build directory:
module load mpi/openmpi # or your system's MPI
cmake ..
cmake --build .
find_package(MPI) locates the MPI installation from your environment (typically via the mpicc compiler wrapper) and defines the imported target MPI::MPI_C, which carries the needed include paths and link flags.
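A minimal main_mpi.c to smoke-test this setup might look like this (the file name comes from the snippet above; the contents are illustrative):
/* main_mpi.c - minimal MPI test program */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}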
MPI, OpenMP, and Other HPC Features
Enabling OpenMP
If your code uses OpenMP pragmas, you can request OpenMP support:
find_package(OpenMP REQUIRED)
add_executable(omp_app main_omp.c)
target_link_libraries(omp_app PRIVATE OpenMP::OpenMP_C)
This will add the right compile and link flags (e.g. -fopenmp, /openmp) depending on your compiler.
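A minimal main_omp.c to verify the flags are applied (illustrative contents):
/* main_omp.c - minimal OpenMP test program */
#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}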
Combining MPI and OpenMP
Typical for hybrid MPI+OpenMP codes:
find_package(MPI REQUIRED)
find_package(OpenMP REQUIRED)
add_executable(hybrid_app main_hybrid.c)
target_link_libraries(hybrid_app
  PRIVATE
    MPI::MPI_C
    OpenMP::OpenMP_C
)
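A hypothetical main_hybrid.c exercising both layers; note the use of MPI_Init_thread to request thread support:
/* main_hybrid.c - minimal hybrid MPI+OpenMP test program */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* Ask for thread support adequate for OpenMP regions between MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("Rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}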
Using CMake with Different HPC Toolchains
Clusters often provide multiple toolchains (e.g. GCC, Intel oneAPI, LLVM). CMake adapts as long as you configure in a clean build directory with the correct environment loaded.
Examples:
Using Intel oneAPI (example)
module load intel-oneapi-compilers/2025 # example name
mkdir build-intel
cd build-intel
cmake .. \
  -DCMAKE_C_COMPILER=icx \
  -DCMAKE_CXX_COMPILER=icpx \
  -DCMAKE_Fortran_COMPILER=ifx
cmake --build .
You can set Intel-specific options via target_compile_options or generator expressions, but the basic CMake usage remains the same.
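For example, a generator expression can gate an Intel-specific flag (here -xHost, purely as an illustration) on the compiler actually in use, assuming a CMake new enough to report icx under the IntelLLVM compiler ID:
# Enable host-architecture tuning only when compiling with icx
target_compile_options(my_hpc_app PRIVATE
  $<$<C_COMPILER_ID:IntelLLVM>:-xHost>
)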
Using LLVM/Clang
module load llvm/18.1 # example
mkdir build-llvm
cd build-llvm
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
cmake --build .
Handling CUDA and GPUs (Brief Overview)
Many GPU-enabled HPC codes use CMake to manage CUDA builds. Basic (high-level) pattern:
cmake_minimum_required(VERSION 3.18)
project(MyGPUApp LANGUAGES CXX CUDA)
add_executable(gpu_app main.cu)
set_target_properties(gpu_app PROPERTIES
  CUDA_SEPARABLE_COMPILATION ON
)
And then:
module load cuda/12.4 # example
mkdir build-gpu
cd build-gpu
cmake ..
cmake --build .
This allows CMake to call nvcc (or the appropriate CUDA compiler) and manage host/device compilation.
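One setting you will usually want to pin down is the GPU architecture to generate code for. Since CMake 3.18 this is a target property; the value 80 (compute capability 8.0) below is only an example and should match your cluster's GPUs:
# Generate device code for compute capability 8.0 (adjust to your hardware)
set_target_properties(gpu_app PROPERTIES CUDA_ARCHITECTURES "80")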
Dealing with Installation Prefixes and Run-Time Use
In shared environments, you might want to “install” your compiled code into a directory for later use:
add_executable(my_hpc_app main.c)
install(TARGETS my_hpc_app DESTINATION bin)
Configure with:
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/local/myhpcapp
cmake --build .
cmake --install .
Your executable will end up in $HOME/local/myhpcapp/bin, which you can add to PATH.
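For example, in your shell startup file or job scripts:
export PATH=$HOME/local/myhpcapp/bin:$PATH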
Practical Tips for Using CMake on Clusters
- Always use separate build directories per configuration (debug/release, compiler, GPU vs CPU).
- Load modules before configuring so CMake detects the right compilers and libraries.
- Don’t reuse a build directory with a different compiler; delete it or create a new one.
- Check the CMake version on the cluster with cmake --version; some features need newer CMake.
- Use cmake --build . -- -j8 (or similar) to parallelize builds when allowed by the system.
These basics are usually enough to:
- Build third-party HPC software that uses CMake.
- Start organizing your own codes in a portable, maintainable way.
- Integrate MPI, OpenMP, CUDA, and other HPC components in a consistent build system.