What CI/CD Solves
Traditional software delivery often suffers from:
- “It works on my machine” problems
- Long delays between writing code and seeing it in production
- Manual, error‑prone release procedures
- Unclear visibility into what’s deployed where
CI/CD (Continuous Integration / Continuous Delivery / Continuous Deployment) addresses these by standardizing and automating how changes move from developer machines to production, typically using Linux servers as runners, build agents, or deployment targets.
Key Concepts and Terminology
Continuous Integration (CI)
Continuous Integration is about frequently merging code changes into a shared main branch and automatically validating them.
Core ideas:
- Single source of truth: a central repository (e.g., Git) with a main branch that is always in a working state.
- Frequent integration: small, incremental merges vs. rare, huge merges.
- Automated verification: every push or merge request triggers:
- Code checkout
- Build/compile (if applicable)
- Automated tests (unit, basic integration)
- Static analysis / linters
Goals:
- Detect integration issues early
- Provide quick feedback to developers
- Keep main (or master) always green and deployable
Continuous Delivery (CD)
Continuous Delivery extends CI: after code is built and tested, it is always in a releasable state, and a deployment to production is:
- Reliable
- Repeatable
- Mostly automated
The final push to production is often triggered manually (e.g., pressing a button, approving a job).
Key idea: release any version at any time with low risk and low effort.
Continuous Deployment
Continuous Deployment goes a step further than Continuous Delivery:
- Every change that passes the pipeline (build, tests, checks) is automatically deployed to production without manual approval.
Useful when:
- You have strong automated tests and safeguards
- You need high release frequency (many times per day)
Pipeline
A CI/CD pipeline is the automated path that code changes follow:
- Code is pushed or merged
- Pipeline is triggered
- A series of stages (e.g., build → test → package → deploy) run
- Each stage runs one or more jobs (e.g., unit tests, integration tests, code quality checks)
- If all stages succeed, the result is an artifact or a deployment
On Linux, these jobs usually run in containers or on Linux build agents.
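The stage-then-job structure above can be sketched as a tiny shell driver. This is a conceptual illustration, not a real CI system: the stage names and echoed messages are placeholders for real commands (a compiler, a test runner, a packaging tool), and `set -e` plays the role of the pipeline engine stopping at the first failing stage.

```shell
#!/bin/sh
# Minimal sketch of pipeline stage ordering: each stage must succeed
# before the next one runs. Stage bodies are placeholders.
set -eu

run_stage() {
  name="$1"; shift
  echo "stage: $name"
  # Run the stage's command; abort the whole pipeline on failure.
  "$@" || { echo "stage $name failed, aborting pipeline" >&2; exit 1; }
}

run_stage build   echo "  compiled application"
run_stage test    echo "  all unit tests passed"
run_stage package echo "  produced artifact app.tar.gz"
echo "pipeline succeeded"
```

Real CI systems express the same ordering declaratively (usually in YAML), but the semantics are the same: a red stage blocks everything after it.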
Artifacts
Artifacts are build outputs that move between stages:
- Compiled binaries
- Container images
- Packages (.deb, .rpm)
- Bundled web assets
- Test reports, coverage data
Principle: build once, use many times (test, then deploy the same artifact).
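The build-once principle can be made concrete with a checksum: the build stage produces the artifact and records its hash, and every later stage verifies it is promoting exactly those bytes. A minimal sketch, with hypothetical file names:

```shell
#!/bin/sh
# Sketch of "build once, use many times": produce one artifact plus a
# checksum, then verify it before every later use.
set -eu

# Build stage: produce the artifact exactly once.
mkdir -p build
printf 'binary contents\n' > build/app.bin
tar -czf app.tar.gz -C build app.bin
sha256sum app.tar.gz > app.tar.gz.sha256

# Any later stage (test, staging, production) reuses the same file and
# refuses to proceed if it differs from what was built:
sha256sum -c app.tar.gz.sha256 && echo "artifact verified, promoting"
```

Container registries apply the same idea automatically via content digests, covered under "Build Once, Promote Many Times" below.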
CI/CD Lifecycle at a High Level
Consider a typical lifecycle for a code change:
- Commit & Push
- Developer works locally
- Pushes to a feature branch
- CI Trigger
- Push or merge request triggers CI job(s) on a Linux runner
- Build Stage
- Fetch dependencies
- Compile/build application
- Produce build artifacts
- Test Stage
- Run unit tests
- Run basic integration tests
- Optionally run static analysis, style checks, security scans
- Package Stage
- Bundle application into:
- A container image
- A Linux package
- A signed binary or archive
- Deploy to Non‑Production
- Automatically deploy to a test or staging environment
- Run further checks (e2e tests, performance smoke tests)
- Production Deployment
- Continuous Delivery: manual approval/trigger
- Continuous Deployment: automatic if previous stages passed
- Post‑Deployment Verification
- Health checks, logs, metrics
- Rollback if needed
Linux is typically the underlying OS at every step: runners, build environments, test servers, and production hosts.
Core Principles of Effective CI/CD
1. Small, Frequent Changes
- Prefer many small merges over rare large ones.
- Smaller changes:
- Are easier to review and test
- Fail faster and are simpler to roll back
- Reduce merge conflicts
2. Fast Feedback
- CI jobs should finish quickly (ideally within minutes).
- Slow pipelines encourage:
- Less frequent commits
- Developers ignoring failures
- Techniques:
- Run the fastest, most critical tests first
- Parallelize jobs on multiple Linux runners
- Split tests into tiers (smoke, full regression, etc.)
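The tiering and parallelization techniques above can be sketched in shell: a fast gate runs first, then independent slower suites run as parallel background jobs, and `wait` propagates each job's exit status so any failing suite fails the run. The suite commands and marker files here are placeholders.

```shell
#!/bin/sh
# Sketch: cheapest checks first, then slower suites in parallel.
set -eu

# Tier 1: fast static checks gate everything else.
echo "lint passed"

# Tier 2: slower, independent suites in parallel background jobs;
# each touches a marker file on success.
sh -c 'sleep 0.2; echo "unit tests passed"; touch unit.ok' &
unit_pid=$!
sh -c 'sleep 0.2; echo "integration tests passed"; touch integration.ok' &
integ_pid=$!

# wait returns each job's exit status, so a failing suite fails the run.
wait "$unit_pid"
wait "$integ_pid"
echo "all tiers green"
```

On a real CI system the parallel jobs would run on separate Linux runners rather than as background processes, but the fail-fast ordering is the same.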
3. Single Source of Truth and Declarative Config
- The pipeline is defined as code (usually YAML) stored in the same repository.
- Benefits:
- Versioned with the application
- Reproducible across Linux hosts
- Peer reviewable like other code
Principle: everything that can be automated (build, test, deployment logic) should live in declarative configuration, not in ad‑hoc scripts run manually from someone’s laptop.
4. Build Once, Promote Many Times
Avoid rebuilding code at each stage:
- Build the artifact once in CI
- Reuse the same artifact for:
- Testing
- Staging deployment
- Production deployment
Why:
- Ensures the exact thing you tested is what you deploy.
- Reduces non‑reproducible builds and environment drift.
On Linux, this often means:
- Building a container image once
- Pushing it to a registry
- Referencing the same image tag/digest in every environment
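The image-promotion flow can be sketched as follows. The registry name and digest are hypothetical; in a real pipeline the digest is reported by the registry after the single build-and-push step, and every environment then deploys by that immutable reference.

```shell
#!/bin/sh
# Sketch: every environment references the same immutable image identifier.
set -eu

IMAGE="registry.example.com/webapp"
DIGEST="sha256:0123abcd"          # produced once by the build stage
REF="$IMAGE@$DIGEST"

for env in test staging production; do
  # A real deploy would pull $REF on the target host or hand it to an
  # orchestrator; the point is that the reference never changes.
  echo "deploying $REF to $env"
done
```

Deploying by digest rather than a mutable tag (like `latest`) guarantees every environment runs the bytes that were tested.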
5. Environments and Promotion
An environment is a deploy target (e.g., test, staging, production).
Key ideas:
- Each environment has its own config (e.g., URLs, credentials), but:
- The artifact should be the same
- Promotion flow:
dev → test → staging → production
- Every promotion step is executed via the pipeline, not manually on servers.
Linux hosts may differ per environment, but the deployment process should be consistent and automated.
6. Idempotent and Repeatable Deployments
A deployment process is idempotent if running it multiple times leads to the same result, without harmful side effects.
Principles:
- Deployment scripts should:
- Handle partial failures gracefully
- Not rely on manual pre‑steps (e.g., “run this command once manually”)
- All changes to Linux systems (packages, services, configs) should be:
- Scripted
- Tested
- Reproducible
CI/CD tools and configuration management (covered elsewhere) work together to enforce this.
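A minimal sketch of an idempotent deploy step, with hypothetical paths: every operation checks current state before changing it, so re-running after a partial failure is safe and the second run is a no-op.

```shell
#!/bin/sh
# Sketch of an idempotent deploy step: safe to run repeatedly.
set -eu

RELEASE_DIR=./releases/v1.2.3
MARKER="$RELEASE_DIR/.deployed"

mkdir -p "$RELEASE_DIR"           # -p: succeeds even if the directory exists

if [ -f "$MARKER" ]; then
  echo "v1.2.3 already deployed, nothing to do"
else
  printf 'app files\n' > "$RELEASE_DIR/app.bin"   # placeholder copy step
  touch "$MARKER"                                 # record completion last
  echo "v1.2.3 deployed"
fi
```

Writing the marker only after the copy succeeds means an interrupted run simply repeats the copy next time instead of leaving the host half-deployed.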
7. Test Automation as a Gate
CI/CD relies heavily on automated tests:
- A pipeline fails if tests fail
- No deployment to higher environments if earlier tests are red
- Minimum requirement for serious CI:
- A reliable suite of automated tests that can run unattended on Linux runners
Common test gating patterns:
- Only allow merges to main if:
- All CI checks pass
- Required tests have succeeded
- Only allow promotion to production if:
- Integration and smoke tests pass in staging
8. Security and Compliance in the Pipeline
Security checks are integrated into CI/CD:
- Static Application Security Testing (SAST)
- Dependency vulnerability scanning
- Container image scanning
- Secret detection (ensuring no keys/passwords in code)
Principles:
- Security as an automated gate — not a late, manual step.
- Secrets (API keys, passwords) never stored in code:
- Use CI/CD secret stores or environment variables
- Use Linux credential stores or secret managers when deploying
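The environment-variable approach can be sketched as a fail-fast check: the deploy script expects the CI/CD system to inject the secret and refuses to run otherwise. `DEPLOY_TOKEN` and the helper are hypothetical names for illustration.

```shell
#!/bin/sh
# Sketch: secrets arrive from the CI/CD secret store as environment
# variables, never from the repository.
set -eu

require_env() {
  eval "v=\${$1:-}"
  [ -n "$v" ] || { echo "error: $1 is not set" >&2; return 1; }
  echo "$1 is present"
}

# In a real job the CI/CD system injects this variable; the fallback below
# exists only so the sketch runs standalone. Never commit real credentials.
DEPLOY_TOKEN="${DEPLOY_TOKEN:-demo-placeholder}"

require_env DEPLOY_TOKEN && echo "proceeding with deployment"
```

Failing fast on a missing secret turns a misconfigured pipeline into an obvious red job instead of a confusing partial deployment.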
9. Observability and Traceability
You need visibility into:
- Which version is running where
- Who approved a deployment
- Why a build failed
CI/CD supports this with:
- Build logs
- Deployment logs
- Timestamps, commit hashes, and tags
- Links between:
- Commits
- Pipeline runs
- Releases
- Issues/tickets
In Linux environments, log files, systemd logs, and monitoring agents complement CI/CD system logs for full traceability.
10. Rollback and Failure Handling
CI/CD design assumes things will fail and prepares for it:
- Automated rollback strategies:
- Redeploy a previously known‑good artifact
- Switch traffic back in blue‑green deployments
- Canary deployments:
- Send a small portion of traffic to the new version first
- Clear policies:
- What conditions trigger rollback (e.g., health checks, error rates)
- Who can initiate rollback
Principle: deployment should be reversible and quick.
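A sketch of health-check-driven rollback: the previously known-good version stays available, so reverting is a single step. The version names are hypothetical, and the health check here simulates a failure; a real check would probe an HTTP endpoint, error rates, or a systemd unit's state.

```shell
#!/bin/sh
# Sketch: roll back to the last known-good version when the health check
# fails after a deployment.
set -eu

PREVIOUS=v1.2.2
CURRENT=v1.2.3
ACTIVE="$CURRENT"

health_check() {
  # Placeholder for a real probe; simulate a failing new version.
  return 1
}

if health_check; then
  echo "$CURRENT is healthy"
else
  echo "health check failed, rolling back to $PREVIOUS"
  ACTIVE="$PREVIOUS"
fi
echo "active version: $ACTIVE"
```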
Common CI/CD Workflow Patterns
Branching and Integration Patterns
- Feature branches:
- Each feature/bugfix in its own branch
- CI runs on every push
- Merge only when green
- Trunk‑based development:
- Short‑lived branches
- Frequent merges into main
- Feature flags used to hide incomplete features
CI/CD supports either style; Linux runners simply execute the jobs defined for each branch.
Build and Test Strategies
Typical pipeline stages:
- lint – fast static checks (e.g., formatting, style)
- unit-test – quick tests focused on small pieces of code
- integration-test – require services (databases, APIs), often in containers on Linux
- e2e or system tests – more expensive, may run only on specific branches
Principle: cheapest checks first, fail early.
Deployment Strategies
Different strategies manage risk when deploying to Linux servers:
- Recreate: stop old version, start new version (simple, but downtime).
- Rolling: update a few instances at a time; others continue serving.
- Blue‑Green:
- Two environments: Blue (current) and Green (new).
- Deploy to Green → switch traffic → keep Blue as rollback.
- Canary:
- Start with a small subset of servers or users
- Expand if metrics remain healthy
CI/CD orchestrates these strategies, while Linux tools (systemd, orchestrators, load balancers) execute them.
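On a single Linux host, the blue-green switch is often just an atomic symlink flip that a web server or load balancer resolves. A minimal sketch, with hypothetical directory names:

```shell
#!/bin/sh
# Sketch of a blue-green switch via a symlink the server resolves.
set -eu

mkdir -p blue green
printf 'v1\n' > blue/version
ln -sfn blue live                 # traffic currently points at blue

# Deploy the new version into the idle environment, then switch atomically:
printf 'v2\n' > green/version
ln -sfn green live                # switch; blue stays intact for rollback

echo "now serving version $(cat live/version) from $(readlink live)"
```

Because blue is untouched by the switch, rollback is the reverse `ln -sfn blue live`, which is exactly the "keep Blue as rollback" step described above.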
CI/CD and Linux: Practical Considerations
Although tool‑specific details are covered separately, some Linux‑specific aspects are important to the principles:
- Runners/agents:
- Typically Linux virtual machines, containers, or bare‑metal
- Need:
- Required compilers/toolchains
- Access to networks, package repositories, container registries
- Environment consistency:
- Use containers or predefined images to keep build environments stable.
- Avoid “snowflake” Linux servers with unique, undocumented tweaks.
- Permissions and security:
- Use least privilege for CI/CD service accounts on Linux hosts:
- Limit sudo access
- Use SSH keys or tokens stored securely in CI/CD settings
- Artifacts and registries:
- Store artifacts in package registries or artifact repositories
- Use image registries for containerized workloads
- Ensure Linux hosts can pull from these in a secure way (TLS, auth)
Designing a Simple CI/CD Pipeline (Conceptually)
To tie the principles together, consider this conceptual minimal pipeline for a web app deployed on Linux:
- Trigger: push to
mainor a merge request. - Build:
- Run on a Linux runner
- Install dependencies
- Build the app
- Create a container image
- Test:
- Run unit tests inside the same environment
- If any test fails → pipeline fails → no deployment
- Package & Push:
- Push container image to a registry
- Tag the image with the commit hash and, optionally, a semantic version
- Deploy to Staging:
- SSH or orchestrated deployment to Linux staging servers
- Run smoke tests
- Approval for Production (Continuous Delivery) or automatic deployment (Continuous Deployment):
- Apply deployment strategy (rolling/blue‑green)
- Monitor health and metrics
- Post‑Deployment:
- Mark the deployment as successful
- If issues arise, trigger rollback using previous image/tag
This pipeline exemplifies CI/CD principles: small changes, automated verification, reproducible builds, controlled promotion, and safe rollback — all running on Linux infrastructure.
Summary of CI/CD Principles
- Integrate code changes frequently and automatically (CI).
- Keep software always in a releasable state (Continuous Delivery).
- Optionally deploy automatically to production on every successful change (Continuous Deployment).
- Define pipelines as code and run them on consistent Linux environments.
- Automate builds, tests, packaging, and deployments.
- Build artifacts once and promote them through environments.
- Use automated tests and security scans as gates.
- Make deployments idempotent, observable, and reversible.
These principles form the conceptual foundation; later chapters focus on implementing them with specific CI/CD tools and Linux‑centric workflows.