Conceptual Differences
At a high level, both containers and virtual machines (VMs) are ways to run applications in isolation on shared hardware. The key difference is how they achieve that isolation:
- Virtual machines virtualize the hardware. Each VM runs its own full operating system (OS) on top of a hypervisor.
- Containers virtualize at the OS level. Many containers share the same host kernel but have isolated user space environments.
This has deep implications for performance, density, startup time, and operational models.
Virtual Machines in a Nutshell
A VM looks and behaves like a full, separate computer:
- Includes:
- Virtual hardware (vCPU, virtual NICs, virtual disks)
- A complete guest operating system
- Applications running inside that OS
- Managed by a hypervisor (e.g. KVM, VMware ESXi, Hyper-V), which:
- Schedules VMs on physical CPUs
- Provides virtual devices and manages memory isolation
Key characteristics:
- Strong isolation: each VM has its own kernel and OS.
- Heavyweight: larger memory and disk footprint due to full OS per VM.
- Boot times similar to physical servers (seconds to minutes).
- Operational model: often managed as “pets” or “small servers,” each with its own lifecycle, patching, and configuration.
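One concrete way to see the hardware layer a hypervisor builds on: the sketch below (a Linux-only illustration) checks whether the CPU advertises the virtualization extensions (Intel VT-x / AMD-V) that hypervisors such as KVM rely on for efficient CPU virtualization. Inside a VM without nested virtualization enabled, the count is typically 0.

```shell
# Count /proc/cpuinfo lines advertising hardware virtualization:
# "vmx" = Intel VT-x, "svm" = AMD-V. Linux-only; harmless read.
vt_lines=$(grep -Ec 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
vt_lines=${vt_lines:-0}
echo "CPU lines advertising VT-x/AMD-V: ${vt_lines}"
```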
Containers in a Nutshell
Containers provide isolated environments that share the host OS kernel:
- Include:
- Application and its dependencies (libraries, runtimes, tools)
- A minimal user space filesystem
- Share:
- The host kernel (system calls, process scheduling, etc.)
Isolation is typically provided by:
- Linux namespaces (isolating processes, network, filesystems, etc.)
- cgroups (controlling CPU, memory, IO usage)
- Filesystem features (e.g. overlay filesystems)
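These building blocks are observable on any Linux host. A minimal inspection sketch (assumes a Linux system with /proc mounted; the commented `unshare` line is illustrative and may require privileges):

```shell
# A container is ultimately a process whose namespaces differ from the
# host defaults. List the namespaces of the current shell:
ls /proc/self/ns

# Show which cgroup(s) this process belongs to; container runtimes place
# each container in its own cgroup to enforce CPU/memory/IO limits:
cat /proc/self/cgroup

# Creating new namespaces normally needs privileges; on many distros an
# unprivileged user can still try a user+PID namespace, e.g.:
#   unshare --user --pid --fork --mount-proc ps ax   # may be disabled
```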
Key characteristics:
- Lighter weight: container images carry only user space components, not a full OS with its own kernel.
- Very fast startup (often milliseconds).
- Designed for “cloud-native” style: immutable images, declarative deployment, many small units (“cattle” rather than “pets”).
Architecture Comparison
Stack Structure
A typical VM stack:
- Physical hardware
- Host OS (for type 2 hypervisors; type 1 hypervisors run directly on the hardware)
- Hypervisor
- Multiple VMs
- Inside each VM:
- Guest OS
- Applications
A typical container stack:
- Physical or virtual hardware
- Host OS
- Container runtime (e.g. containerd, CRI-O, Docker engine)
- Orchestration layer (e.g. Kubernetes/OpenShift)
- Multiple containers
- Inside each container:
- User space filesystem and application
The crucial difference: VMs duplicate the OS layer; containers share it.
Isolation Model
- VMs:
- Hardware-level isolation via hypervisor boundaries.
- A compromise inside one VM rarely directly affects another VM’s kernel.
- Very strong multi-tenant separation, suitable for untrusted workloads or mixed OSes.
- Containers:
- Process-level isolation within a single kernel.
- Attacks that exploit kernel vulnerabilities may affect all containers on the host.
- Additional hardening (SELinux, AppArmor, seccomp, user namespaces) is commonly used to strengthen isolation.
For OpenShift, this security model is especially important; it relies heavily on the underlying container isolation plus additional security controls.
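On a Linux host, some of this hardening state can be inspected directly. A small sketch (Linux-only; the exact files and tools vary by distribution, and absent ones simply mean that feature is not in use):

```shell
# Per-process hardening flags: seccomp filter mode and no_new_privs bit.
grep -E 'Seccomp|NoNewPrivs' /proc/self/status

# Mandatory access control status; either or neither may be present.
cat /sys/module/apparmor/parameters/enabled 2>/dev/null \
  || echo "AppArmor not loaded"
getenforce 2>/dev/null || echo "SELinux tools not installed"
```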
Resource Usage and Performance
Resource Footprint
- VMs:
- Each VM reserves memory for its own OS and services.
- Guest OS processes (e.g. systemd, logging, agents) consume CPU and memory even when not running application workloads.
- Storage footprint includes OS images and updates.
- Containers:
- Share the host kernel; only application and its dependencies are included.
- Fewer background services; typically just the application.
Consequences:
- Higher density: the same hardware can usually run more containers than VMs before saturating CPU, memory, or storage.
- More efficient consumption of resources for small, single-purpose services.
Startup Time
- VMs: need to boot a full OS; startup may take from several seconds to minutes.
- Containers: often just start one process in an already-running OS; startup is typically sub-second.
This speed difference is essential for:
- Horizontal autoscaling based on load.
- Fast CI environments and ephemeral development/testing environments.
- Short-lived batch jobs and serverless/event-driven patterns.
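To get a feel for the container side of this gap, the sketch below times a bare process start, which is roughly the floor for container startup, since a container is ultimately a namespaced process (real container starts add image setup, namespace and cgroup creation; assumes GNU coreutils `date` with nanosecond support):

```shell
# Time how long starting and exiting a trivial process takes on an
# already-running kernel -- typically well under a second.
start_ns=$(date +%s%N)
/bin/true
end_ns=$(date +%s%N)
echo "process start+exit: $(( (end_ns - start_ns) / 1000000 )) ms"
```

A VM performing the same "start" must first boot a kernel and init system, which is why its floor is seconds rather than milliseconds.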
Performance Overhead
- VMs:
- Hypervisor introduces some overhead for I/O, CPU virtualization, and context switching.
- Modern virtualization is efficient, but there is still a gap vs. bare metal, especially for I/O-heavy workloads.
- Containers:
- Processes run almost directly on the host kernel.
- Overhead is mainly from isolation constructs (namespaces, cgroups) and storage layers (e.g. overlayfs).
- Performance is closer to bare metal for many workloads.
In HPC contexts (relevant later in the course), this lower overhead is a common reason to prefer containers where security and compliance allow it.
Operational and Lifecycle Differences
Image vs Machine Centric
- VMs:
- Often treated like persistent servers.
- You “SSH into the VM” and modify it in place (install packages, change configs).
- Golden images exist, but live drift from the base image is common.
- Patching typically means OS-level updates inside each machine.
- Containers:
- Built from images that are usually immutable once published.
- Lifecycle:
- Build image (from Dockerfile or similar).
- Push to registry.
- Deploy many containers from the same image.
- Updates usually mean building a new image and redeploying containers.
- Configuration is injected via environment variables, ConfigMaps, Secrets, etc. (covered elsewhere), not by manual changes inside running containers.
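The build → push → deploy lifecycle above can be sketched with a minimal image definition. The base image, registry host, and application path below are illustrative placeholders, and the build/push commands are shown as comments because they require a container engine and a registry:

```shell
# Write a minimal image definition (names are illustrative only).
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
EOF

# Typical lifecycle, once built the image is immutable:
#   podman build -t registry.example.com/team/myapp:1.0 .
#   podman push registry.example.com/team/myapp:1.0
#   (then deploy many containers from that one image via the orchestrator)
```

An update means editing the Dockerfile, building a new tag, and redeploying, never patching a running container in place.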
This image-based, immutable pattern is a foundation for how OpenShift and Kubernetes manage applications.
Scaling and Elasticity
- VMs:
- Scaling out means provisioning more VMs.
- Provisioning is slower and requires OS configuration, sometimes with manual steps or automation tooling (e.g. Ansible, cloud-init).
- Typically coarser-grained scaling (add/remove big units).
- Containers:
- Scaling out means starting more container instances (pods).
- Orchestrators like Kubernetes/OpenShift can rapidly add or remove replicas.
- Fine-grained scaling: scale a microservice from 2 to 200 instances quickly.
Elasticity is a key property of cloud-native applications, which this course will revisit when discussing scaling and high availability.
Management Tools and Interfaces
- VMs:
- Managed using hypervisor or cloud APIs (vSphere, OpenStack, AWS EC2, etc.).
- Configuration and orchestration frequently handled via config management tools (e.g. Ansible, Puppet, Chef).
- Containers:
- Managed via container runtimes and orchestrators (Kubernetes/OpenShift).
- Declarative model: you define desired state (e.g. replicas: 3), and the system continuously reconciles to match it.
- Networking, storage, security, and scaling are handled via Kubernetes/OpenShift abstractions rather than per-machine scripts.
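A minimal sketch of that declarative model, using hypothetical names (`myapp`, `registry.example.com`): the manifest states the desired replicas: 3, and the orchestrator reconciles the cluster toward it.

```shell
# Write a minimal declarative manifest; all names are placeholders.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/team/myapp:1.0
EOF

# Applying it needs a cluster context, e.g.:
#   oc apply -f deployment.yaml
```

If a pod dies, the controller notices the drift from `replicas: 3` and starts a replacement; no per-machine script is involved.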
Security and Isolation Trade-offs
Strength of Isolation
- VMs provide:
- Strong isolation ideal for hosting workloads from different organizations or with very different trust levels.
- The possibility to run different OSes (Linux, Windows) on the same hardware.
- Containers provide:
- Lighter isolation for processes sharing a kernel.
- A smaller attack surface when images are minimal and unprivileged.
- A stronger need for kernel-level hardening and platform-level policies (like OpenShift’s Security Context Constraints) to reduce risk.
Patch and Update Strategy
- VMs:
- OS-level security patches must be applied inside each guest OS.
- Requires patch management processes per VM or per VM group.
- Containers:
- Base images can be rebuilt with patched components.
- Re-deploying updated containers refreshes the environment quickly.
- Encourages frequent, automated refresh of runtime environments.
This pattern aligns well with CI/CD workflows, which you will explore in later chapters.
Use Cases: When to Prefer Which
Scenarios Favoring Virtual Machines
- Need to run multiple different OS types (e.g. Linux and Windows) on the same hardware.
- Strong tenant isolation where mutual trust is low or regulatory separation is strict.
- Legacy applications tightly bound to a specific OS environment, kernel modules, or system-level drivers.
- Full system control required (e.g. custom kernels, low-level tuning or appliances).
Scenarios Favoring Containers
- Microservices architectures and cloud-native applications.
- High-density environments where maximizing utilization is important.
- Workloads needing rapid scaling, quick start/stop, or short lifetimes.
- CI/CD environments where frequent rebuild and redeploy is expected.
- HPC or data/AI workloads that benefit from near bare-metal performance but also need packaging and reproducibility.
In OpenShift, you will almost always work with containers; VMs may appear behind the scenes as the infrastructure on which OpenShift itself runs.
Hybrid Approaches and Integration
Containers and VMs are not mutually exclusive:
- Containers often run inside VMs:
- Cloud providers typically give you VMs (e.g. EC2 instances), and on top of them you run Kubernetes/OpenShift clusters.
- This is the most common setup in public clouds.
- VMs can be managed alongside containers:
- Some platforms allow defining VMs as Kubernetes resources, providing a unified control plane.
- This helps migrate from VM-based architectures to container-based ones gradually.
- Specialized container technologies (e.g. “micro-VMs”) blend aspects of both:
- Offer stronger isolation by wrapping containers in very lightweight VMs.
- Used for multi-tenant or security-sensitive workloads where traditional containers are considered too weakly isolated.
Understanding this spectrum helps you see where OpenShift fits: it is a container-native platform that often runs on virtualized infrastructure, and in some cases can interoperate with VM-based workloads.
Summary of Key Differences
A concise comparison:
- Isolation level:
- VMs: hardware-level, separate kernels.
- Containers: OS-level, shared kernel.
- Resource footprint:
- VMs: heavier; full OS per VM.
- Containers: lighter; minimal user space only.
- Startup time:
- VMs: slower (seconds–minutes).
- Containers: fast (milliseconds–seconds).
- Operational model:
- VMs: machine-centric, often mutable.
- Containers: image-centric, usually immutable.
- Use cases:
- VMs: mixed OS, strong tenant isolation, legacy apps.
- Containers: cloud-native apps, microservices, high density, rapid scaling.
These differences are fundamental to understanding why platforms like OpenShift are built around containers rather than VMs, and how they enable modern cloud-native application patterns.