Overview
PART V focuses on running Linux as production infrastructure: servers that must be reliable, secure, observable, and maintainable over long periods. This part assumes you are comfortable with the command line and basic administration, and it shifts your mindset from “using Linux” to “operating Linux services for others.”
You will not learn “what a web server is” or “how to install Linux” here; those belong to earlier parts. Instead, you will learn how to design, configure, and run common network services at a professional level, and how to think about availability, scale, and security.
By the end of this part you should be able to:
- Deploy and harden common server-side components (web, database, mail, DNS/DHCP).
- Design basic high-availability and load-balanced setups.
- Understand and implement key operational patterns like backups, monitoring, and failover.
This part is divided into several major topics, each of which has its own chapter.
Mindset: From “Machine Admin” to “Service Owner”
Up to now, the focus has mostly been on managing a single system. In server administration, the unit of responsibility is the service, not the individual host.
Key mindset shifts:
- Availability over convenience
  - Changes are planned, documented, and reversible.
  - Maintenance windows and rollback plans become routine.
- Reproducibility over manual tweaks
  - Configuration belongs in version control.
  - Environments should be rebuildable from documented steps or automation.
- Defense in depth instead of ad-hoc security
  - Services are exposed to the Internet or untrusted clients.
  - Network, system, and application security need to work together.
- Observability instead of guesswork
  - Logs, metrics, health checks, and alerts become first-class tools.
  - You design services to be inspectable and monitorable.
Each chapter in this part explores these ideas in the context of a specific class of service.
Topics Covered in This Part
Web Servers
You will learn how to expose HTTP/HTTPS services on Linux using common web servers and how to operate them in production-like scenarios.
Specific focus areas in this part:
- Apache basics
  - Running Apache as a general-purpose web server.
  - High-level configuration structure, modules, and virtual hosts.
- Nginx basics
  - Using Nginx as a performant, event-driven web server.
  - Serving static content and proxying to application backends.
- Virtual hosts
  - Hosting multiple sites or applications on the same server.
  - Name-based vs IP-based separation and related configuration patterns.
- SSL/HTTPS
  - Terminating TLS at the web server.
  - Certificate installation, renewal workflows, and secure defaults.
- Reverse proxy concepts
  - Fronting application servers (e.g., Python, PHP, Node.js) with Apache or Nginx.
  - Basic load distribution and connection handling at the edge.
The goal is not to teach full web development, but to make you able to deploy and expose applications reliably and securely.
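To make these topics concrete, here is a minimal sketch of a name-based virtual host that terminates TLS and reverse-proxies to an application backend. All hostnames, ports, and certificate paths are illustrative placeholders, not values from this course:

```nginx
# Hypothetical Nginx virtual host: TLS termination plus reverse proxy.
# app.example.com, the backend port, and the certificate paths are
# placeholders for illustration.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Static assets served directly by Nginx.
    location /static/ {
        root /var/www/app;
    }

    # Everything else is proxied to the application server.
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Redirect plain HTTP to HTTPS.
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}
```

The same pattern extends naturally: adding more `server` blocks gives name-based virtual hosting, and pointing `proxy_pass` at an upstream group is the first step toward the load balancing covered later in this part.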
Database Servers
This topic covers relational database services from the Linux administrator’s perspective, distinct from a DBA or application developer role.
You will cover:
- MySQL/MariaDB
  - Running and managing these databases as network services.
  - Basic configuration relevant to performance and reliability.
- PostgreSQL
  - Operating Postgres instances on Linux.
  - Common operational tasks: connections, storage, and logging.
- Database backup strategies
  - Logical vs physical backups.
  - Point-in-time recovery concepts and backup validation.
- Database hardening
  - Restricting access at the network and user levels.
  - Minimizing default attack surface and securing configuration.
Here the emphasis is on keeping data safe, available, and recoverable, not on SQL query design.
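As a taste of the backup strategies above, the following command sketch shows logical backups for both database families. It assumes running servers, a hypothetical database named `appdb`, and credentials configured out of band; adjust everything to your environment:

```
# Logical backup of a single MySQL/MariaDB database (hypothetical
# database name "appdb"; credentials assumed to come from ~/.my.cnf).
mysqldump --single-transaction appdb | gzip > appdb-$(date +%F).sql.gz

# Logical backup of a PostgreSQL database in custom format,
# suitable for selective restore with pg_restore.
pg_dump -Fc appdb > appdb-$(date +%F).dump

# A backup only counts once it restores: at minimum, verify the
# archive is readable, and periodically restore to a scratch instance.
pg_restore --list appdb-$(date +%F).dump > /dev/null && echo "archive readable"
```

Logical dumps like these are portable and easy to inspect; physical backups and point-in-time recovery trade that simplicity for faster restores and finer recovery granularity.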
Email Services
Email services are a classic but still important part of server administration. They combine networking, identity, and security concerns.
You will learn:
- SMTP fundamentals
  - How email moves between servers.
  - Roles of MTAs (Mail Transfer Agents) and delivery paths.
- Configuring Postfix
  - Running a modern MTA on Linux.
  - Basic routing, relaying, and local delivery patterns.
- IMAP/POP3 with Dovecot
  - Providing mailbox access for users.
  - Integrating with an MTA to serve full email accounts.
- Spam filtering systems
  - Using external tools and content filters to reduce spam.
  - Basic pipeline design for message scanning.
- Email routing concepts
  - MX records, relay hosts, and smarthosts.
  - Handling inbound vs outbound mail for domains.
The focus here is on building and maintaining a functional, reasonably secure mail infrastructure rather than deep anti-spam engineering.
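The routing concepts above can be sketched in a few Postfix parameters. This is a hypothetical `/etc/postfix/main.cf` fragment, with `example.com` and the smarthost address as placeholders:

```
# Hypothetical main.cf fragment: accept mail for example.com locally
# and relay outbound mail through a provider smarthost.
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain

# Domains this server accepts mail for (local delivery).
mydestination = $myhostname, localhost, $mydomain

# Send outbound mail via a smarthost on the submission port instead
# of delivering directly (common when port 25 egress is restricted).
relayhost = [smtp.relay-provider.example]:587

# Only relay for trusted networks.
mynetworks = 127.0.0.0/8
```

Inbound routing is the mirror image: an MX record for `example.com` must point at this host for other MTAs to deliver mail to it.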
DNS and DHCP
DNS and DHCP are foundational services for any network. Misconfigurations here can make entire environments unreachable.
You will study:
- Understanding DNS
  - Core concepts relevant to operating DNS on Linux.
  - How queries are resolved and served.
- Setting up BIND
  - Running an authoritative or caching DNS server.
  - Configuration approaches specific to Linux deployments.
- DNS zones and records
  - Designing and maintaining zone files for domains you manage.
  - Using record types appropriate to common server scenarios.
- DHCP server configuration
  - Assigning IP addresses and network parameters automatically.
  - Coordinating DHCP with DNS where appropriate.
- DNS troubleshooting
  - Practical debugging tools and steps when name resolution fails.
  - Identifying issues in your own zone vs upstream problems.
These skills are essential for operating more complex networks and multi‑host environments.
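To preview what a zone looks like in practice, here is a minimal illustrative zone file for `example.com` (a reserved documentation domain), using the record types most server scenarios need:

```
; Hypothetical zone file for example.com, shown for illustration only.
$TTL 3600
@       IN SOA  ns1.example.com. hostmaster.example.com. (
                2024010101 ; serial
                7200       ; refresh
                900        ; retry
                1209600    ; expire
                3600 )     ; negative-cache TTL
        IN NS   ns1.example.com.
        IN MX   10 mail.example.com.
ns1     IN A    192.0.2.1
www     IN A    192.0.2.10
mail    IN A    192.0.2.20
```

A typical workflow is to validate the file with `named-checkzone example.com <zonefile>` before reloading BIND, and to troubleshoot by querying the authoritative server directly, e.g. `dig @ns1.example.com www.example.com A`, to separate problems in your own zone from upstream resolution issues.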
Load Balancing
As services grow, you often need multiple backend instances. Load balancers sit in front of those instances and distribute traffic across them.
This part will cover:
- HAProxy fundamentals
  - Using HAProxy as a dedicated TCP/HTTP load balancer.
  - Basic configuration, health checks, and backend management.
- Nginx as a load balancer
  - Leveraging Nginx's proxy capabilities for simple balancing.
  - Where Nginx is sufficient vs when you want a specialized tool.
- Session persistence
  - Strategies to keep users bound to the same backend when necessary.
  - Trade-offs between stickiness and scalability.
- High availability concepts
  - Basic patterns for avoiding single points of failure at the load balancer layer.
  - Integration with failover tools and redundant infrastructure.
The emphasis is on practical, minimal configurations that solve real problems without unnecessary complexity.
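A minimal HAProxy configuration already combines several of the ideas above: health checks, backend management, and session persistence. This is an illustrative `haproxy.cfg` fragment; the addresses, the `/healthz` path, and the server names are placeholders:

```
# Hypothetical haproxy.cfg fragment: an HTTP frontend balancing two
# backend web servers with active health checks and cookie-based
# session persistence.
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /healthz
    # Sticky sessions via an inserted cookie; omit for stateless apps.
    cookie SRV insert indirect nocache
    server web1 192.0.2.11:8080 check cookie web1
    server web2 192.0.2.12:8080 check cookie web2
```

The `check` keyword makes HAProxy probe each backend and stop routing to failed ones; the cookie directives pin a returning client to the same server, at the cost of less even load distribution.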
Clustering and High Availability
Beyond single load balancers, production environments often need cluster-level resilience. This topic introduces Linux-based HA stacks.
You will learn:
- Failover principles
  - Active/passive vs active/active setups.
  - Heartbeats, quorum, and split-brain risks at a conceptual level.
- Pacemaker
  - Managing cluster resources and constraints on Linux.
  - Declarative control of which node should run what.
- Corosync
  - Providing messaging and membership for a cluster stack.
  - Basic configuration topics specific to HA environments.
- Cluster resource management
  - Defining services in a cluster-aware way.
  - Handling start/stop/order constraints and resource migration.
- Distributed filesystems
  - Using shared storage to support HA services.
  - High-level overview of when and why distributed filesystems are needed.
Here the focus is on understanding and using existing HA tools, not on designing distributed algorithms from scratch.
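As a flavor of declarative resource management, the following sketch uses `pcs` (one common Pacemaker administration tool) to define a floating IP and a web server that must run together. The IP address and resource names are hypothetical, and the commands assume an already-running Pacemaker/Corosync cluster:

```
# Hypothetical sketch: a floating virtual IP and an nginx service
# managed as cluster resources that fail over together.
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.100 cidr_netmask=24
pcs resource create web systemd:nginx

# Keep the VIP and web server on the same node, and start the VIP first.
pcs constraint colocation add web with vip INFINITY
pcs constraint order vip then web
```

The point of the declarative style is that you state *what* should run and under which constraints; the cluster stack decides *where*, and migrates resources when a node fails.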
Skills You Should Aim to Develop
As you work through the individual chapters in this part, keep an eye on the cross-cutting capabilities that distinguish expert server administrators:
- Designing service topologies
  - Choosing where to put web servers, databases, and load balancers.
  - Understanding trust boundaries and failure domains.
- Standardizing and documenting configurations
  - Using consistent directory layouts, naming, and comments.
  - Keeping configuration under version control.
- Implementing backups and recovery plans
  - Ensuring each critical service has tested recovery procedures.
  - Knowing how long restoration will take and what data might be lost.
- Monitoring and logging
  - Ensuring every service emits useful logs and metrics.
  - Integrating them into centralized monitoring, even if the tooling is covered in other parts of the course.
- Security as part of normal operations
  - Applying the principle of least privilege to services and network access.
  - Keeping external exposure as small and controlled as possible.
Prerequisites and Assumptions
This part assumes you:
- Are comfortable with basic Linux CLI operations and shell usage.
- Can manage systemd services, users, permissions, and basic networking.
- Understand how to install software via your distribution’s package manager.
- Have basic knowledge of TCP/IP, ports, and client/server concepts.
Those topics are covered in earlier parts; here you focus on assembling them into robust, multi-component server setups.
How to Approach This Part
To get the most value:
- Practice on real or virtual servers
  - Spin up VMs or cloud instances and treat them as real services, not just test boxes you can constantly rebuild without thought.
- Progress from simple to layered setups
  - Start with a single web server.
  - Add TLS.
  - Introduce a separate database.
  - Then add load balancing or clustering.
- Emulate realistic constraints
  - Pretend you are responsible to real users: plan maintenance, avoid reckless changes, and keep written notes of configurations and procedures.
Each subsequent chapter in PART V zooms into one category of server technology while reinforcing the same professional operating practices.