From Development Environment to Cluster Execution
Running an application on an HPC cluster is less about typing one command and more about moving through a small, repeatable workflow. You develop and test the code somewhere, move it and its inputs to the cluster, request resources through the scheduler, run, and then retrieve results. This chapter focuses on that practical path and on the day‑to‑day habits that make cluster runs reliable.
You have already seen what clusters are, what job schedulers do, and how typical workflows look at a high level. Here the emphasis is on the concrete steps and conventions you will actually use when you run applications in a multiuser cluster environment.
Preparing your application for the cluster
On a shared system you rarely compile and run in the same casual way as on a laptop. Before you ever submit a job you should have a cluster‑ready build and a clear separation between source, build artifacts, inputs, and outputs.
On most clusters you will log in to a dedicated login node. Compilation, setup, and small test runs are allowed there, while heavy production runs must happen on compute nodes through the scheduler. This separation is central: even if nothing technically stops you from running mpirun on the login node, you should not do so for anything beyond trivial tests.
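To make the distinction concrete, here is a minimal sketch, assuming a Slurm-managed cluster and a small MPI test binary called hello_mpi (both the scheduler and the binary name are assumptions for illustration):

    # A tiny, short-lived sanity check is acceptable on the login node.
    mpirun -np 2 ./hello_mpi

    # Anything heavier is submitted to the scheduler and runs on compute nodes.
    sbatch run_hello.sh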
A common pattern is to keep a directory hierarchy such as the following (a short shell sketch for creating it appears after the list):
$HOME/projects/your_code/src for source code,
$HOME/projects/your_code/build for build products,
$HOME/projects/your_code/runs for job scripts, inputs, and outputs.
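A minimal sketch of creating that layout, assuming the project is called your_code as above:

    # Create the source/build/runs split in one step (bash brace expansion).
    mkdir -p $HOME/projects/your_code/{src,build,runs}

Keeping build products out of src lets you wipe and redo a build without touching the code, and keeping runs separate keeps job scripts, inputs, and outputs from cluttering the build tree.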
Within the build directory you run the compilers and build system that the cluster provides. For example, you might first load modules for compilers and MPI, then configure and build. The module names and versions below are hypothetical, and a CMake-based build is assumed; adapt both to what your site actually offers:
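    # Load a compiler and MPI toolchain. Names and versions are hypothetical;
    # run `module avail` to see what your cluster actually provides.
    module load gcc/12.2.0
    module load openmpi/4.1.5

    # Configure and build out of the source tree. CMake is assumed here,
    # but the same pattern applies to configure scripts or plain make.
    cd $HOME/projects/your_code/build
    cmake -DCMAKE_BUILD_TYPE=Release ../src
    make -j 4

Once the build succeeds, note the exact modules you loaded; reloading the same versions later is the simplest way to reproduce the build.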