
1.3 Virtual Machines vs Containers

Two Ways to Package and Run Software

Virtual machines and containers both try to solve the same problem. You want an application to run the same way everywhere, no matter what the underlying computer looks like. They do this in very different ways, and that difference affects performance, isolation, resource usage, and how you use Docker in practice.

In this chapter you will focus on the big-picture comparison, not the low-level implementation details. The goal is to give you an intuition for when something is “a VM problem” and when it is “a container problem” before you dive deeper into Docker itself later in the course.

What a Virtual Machine Really Is

A virtual machine, often shortened to VM, pretends to be a separate physical computer that happens to live inside another computer.

On a physical server or laptop, a special program called a hypervisor manages virtual machines. Each VM has its own virtual hardware. It gets virtual CPUs, virtual memory, virtual disks, and virtual network cards. On top of this fake hardware you install a full operating system such as Windows, Ubuntu, or another Linux distribution.

Inside the VM, the guest operating system behaves as if it owns a real computer. It has its own kernel, its own drivers, and its own user-space programs. From the point of view of that guest system, the host and other VMs do not exist.

This strong illusion of a full computer gives you strong isolation. It is difficult for a process inside one VM to see or affect processes in another VM directly, because there is a full operating system boundary between them.

What a Container Really Is

A container works at a different level. Instead of pretending to be a full computer with its own hardware, a container is more like a packaged process or a small group of processes on the same operating system.

All containers on a host share the same operating system kernel. They do not boot their own kernel, and they do not run their own device drivers. Instead, the host kernel splits resources into separate spaces, for example separate views of the filesystem, processes, and networking. From inside the container it looks like it has its own environment, but under the hood there is only one kernel doing the work.
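If you have Docker available, you can see this isolated view directly. The commands below are a sketch, assuming Docker is installed and can pull the public alpine image:

```shell
# Inside the container, `ps` sees only the container's own processes,
# and the filesystem is the image's, not the host's.
docker run --rm alpine ps aux    # typically just the `ps` process itself
docker run --rm alpine ls /      # alpine's root filesystem, not the host's
docker run --rm alpine hostname  # a container-specific hostname
```

Despite these separate views, every one of these commands is ultimately executed by the host's kernel.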

The container image provides the user space around your application. That includes libraries, runtimes such as Python or Node.js, and your application code and configuration. The container runtime, such as the Docker Engine, puts this user space into an isolated environment and starts your application process inside it.

Because there is no extra full operating system to boot, containers usually start very quickly and require fewer resources than VMs for the same workload.

Layering: VMs on Hardware, Containers on an OS

It helps to think in terms of layers. Start at the bottom with physical hardware, such as CPU, memory, disk, and network.

With virtual machines, the typical stack looks like this. Hardware at the bottom, then the hypervisor, which either runs directly on the hardware or on top of a host operating system, depending on the hypervisor type. On top of the hypervisor you run several guest operating systems as VMs, and inside each guest OS you run your applications.

With containers, the stack looks simpler. Hardware, then the host operating system, then the container runtime such as Docker. On top of that runtime you run multiple containers. Each container brings only what it needs for the user space environment, not a whole extra kernel.

Because VMs duplicate entire operating systems, and containers share one, containers usually use less memory and disk space for the same number of applications.

Resource Usage and Density

The different layers have a direct effect on how many workloads you can fit on one machine.

When you run several VMs, each VM needs memory and CPU time for both the guest operating system and your actual application. The disk image for the VM must contain a full OS installation plus your software. If you want to run ten copies of the same application, you may end up with ten full operating systems, one in each VM.

With containers, the main shared kernel and host operating system are reused for all containers. Each container still uses CPU, memory, and disk, but you do not pay for a full OS per application. The image typically contains only your app and its dependencies.

This leads to higher density. You can often run more containers than VMs on the same hardware before you run out of resources. Starting a container is also much faster than booting a full VM. For tasks such as scaling a web service quickly, or spinning up short-lived jobs, that speed difference is important.
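You can feel this difference with a quick experiment. The timing below is a sketch, assuming Docker is installed and can pull the public alpine image:

```shell
# Once the image is local, starting a container, running a command, and
# tearing the container down typically takes a fraction of a second --
# compare that with the time a full guest operating system needs to boot.
docker pull alpine
time docker run --rm alpine true
```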

Key idea: VMs duplicate the operating system, containers share it. This makes containers lighter, faster to start, and more resource efficient than VMs for many application workloads.

Isolation and Security Considerations

Isolation is one area where virtual machines and containers differ in important ways.

A VM has its own kernel. If something goes wrong inside the VM, for example a misbehaving application or a compromise, the attacker must usually break out of the guest operating system and then bypass the hypervisor to reach the host. This layered structure gives strong isolation that has been used and tested for many years in cloud environments.

Containers share a kernel with the host. Isolation is created by operating system features that limit what a process can see and do. In normal cases this is enough to separate applications and teams. However, if there is a vulnerability in the shared kernel or in the configuration of the container runtime, it may be easier for an attacker to affect other containers or the host than it would be across VM boundaries.

This does not mean containers are unsafe. It means that the security model is different. The shared kernel is the main line of defense, so kernel hardening, strict access controls, and up-to-date patches become very important. Running containers as non-root users, choosing minimal images, and dropping unneeded privileges are some of the countermeasures that you will see later in this course.
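As a sketch of what such countermeasures look like in practice, the following `docker run` flags combine several of them, assuming Docker and the public alpine image; the right combination always depends on the application:

```shell
docker run --rm \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  alpine id
# --user:              run as an unprivileged UID/GID instead of root
# --read-only:         mount the container's root filesystem read-only
# --cap-drop ALL:      drop every Linux capability the process does not need
# --security-opt no-new-privileges: block privilege escalation via setuid binaries
```

The `id` command at the end simply confirms that the process inside the container runs as UID 1000 rather than as root.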

For very strict isolation needs, such as running untrusted code from many different customers on one machine, some organizations still prefer VMs or even a combination of VMs and containers.

Portability and Consistency

Virtual machines and containers also address portability at different levels.

A virtual machine lets you package an entire machine image. The same image can often run on any hypervisor that supports that format, given compatible hardware and virtualization features. You can move a VM from one host to another and it will behave the same, because it carries its own operating system and drivers. However VM images are usually large, which makes them slower to move and copy.

A container image packages your application with its user space dependencies. Inside the container you know exactly which runtime and libraries you get, and that environment behaves the same no matter where you run it, as long as the host kernel and architecture are compatible. This gives you a consistent environment for development, testing, and production, which is one of the main reasons Docker became popular.

The limitation is that containers depend on a compatible kernel. For example, a Linux container image expects a Linux kernel under it. You cannot run that image directly on a Windows kernel without some form of extra virtualization or a compatibility layer provided by the host or platform.

Operating System Compatibility

Virtual machines are flexible with operating systems because they emulate hardware. On top of one physical server you can run a Windows VM, a Linux VM, and a BSD VM together. Each guest brings its own kernel that knows how to work with the virtual hardware.

Containers do not emulate hardware. They reuse the host kernel, so containers must be built for the kernel type and CPU architecture of the host. A Linux container needs a Linux kernel. A Windows container needs a Windows kernel. This means you usually cannot mix host and container OS families freely.
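You can verify the shared kernel yourself. The commands below are a sketch for a Linux host with Docker installed, using the public alpine image:

```shell
uname -r                         # kernel version of the host
docker run --rm alpine uname -r  # the same version, reported from inside a container
docker run --rm alpine uname -m  # the same CPU architecture as the host
```

On a macOS or Windows machine the first command reports something different, because there the containers actually run inside a hidden Linux VM, as the next paragraph describes.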

In practice, this leads to patterns such as running a single Linux host and many Linux containers, or a single Windows host and many Windows containers. When people want to build and run Linux containers on Windows or macOS laptops, tools such as Docker Desktop start a lightweight Linux virtual machine behind the scenes. The containers run inside that VM so they can access a Linux kernel.

When VMs Are More Suitable

Even in a world where Docker and containers are everywhere, virtual machines still have an important role.

VMs shine when you need full control over an entire operating system. For example, if you need to test how your application interacts with kernel modules, device drivers, or low level networking, a container is not enough because the kernel is shared. With a VM you can change the kernel version, kernel settings, and drivers without affecting anything else.

VMs also remain common as a security boundary in multi-tenant clouds. A cloud provider might place each customer in their own VM, and the customer may then choose to run Docker containers inside that VM. The VM provides an outer isolation layer, and containers provide a convenient way for the customer to manage their own apps.

Some legacy applications are tightly coupled to the full operating system, expect certain system services, or modify system wide configuration in ways that are not container friendly. Those can be easier to host in a VM than to rewrite or repackage into containers.

When Containers Are More Suitable

Containers are usually a better fit when your main goal is to package and ship applications, not full machines.

They work particularly well for microservices, APIs, web applications, and background workers that can be modeled as processes with clear inputs and outputs. Developers can build an image on their laptop, test it, push it to a registry, and run the same image in staging and production. This reduces the classic “it works on my machine” problem.
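The build-ship-run cycle described above can be sketched with a throwaway image. The image name below is made up for illustration, the registry in the comment is a placeholder, and Docker is assumed to be installed:

```shell
# Build a tiny image from an inline Dockerfile, then run it.
mkdir -p /tmp/webapp-demo && cd /tmp/webapp-demo
cat > Dockerfile <<'EOF'
FROM alpine
CMD ["echo", "same image, same behavior, everywhere"]
EOF
docker build -t webapp-demo:1.0 .
docker run --rm webapp-demo:1.0
# In a real project you would then push the image to a registry so that
# staging and production pull exactly the same bytes:
#   docker push registry.example.com/myteam/webapp-demo:1.0
```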

Because containers start quickly, they are also useful for jobs that run briefly, such as batch tasks and scheduled scripts. They can appear when needed and disappear when finished, consuming resources only while they run.
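A minimal short-lived job of this kind, assuming Docker and the public python image, looks like this:

```shell
# --rm removes the container the moment the job finishes, so it consumes
# resources only while it runs.
docker run --rm python:3-alpine python -c "print(sum(range(10)))"
# Prints 45; afterwards `docker ps -a` shows no leftover container from this run.
```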

The lightweight nature of containers also fits continuous integration and continuous delivery pipelines. Many projects use containers to define reproducible build environments, where each build or test run starts from a clean image.

How Docker Relates to VMs and Containers

Docker focuses on containers, not on virtual machines, but you will often see both used together.

On a traditional Linux server, Docker runs containers directly on top of the Linux kernel. In this setting, there is no VM involved for the containers themselves.

On developer laptops and some hosted platforms, you may not see a Linux kernel directly. For example, Docker on Windows or macOS often starts a small Linux virtual machine and runs all your Linux containers inside that VM. From your point of view you still use Docker commands and work with containers. The intermediate VM layer is just an implementation detail that provides the Linux kernel that containers need.

In larger deployments, a typical pattern is to use VMs as the unit that the cloud provides, then install Docker or another container runtime inside those VMs. Orchestrators that manage large fleets of containers usually schedule containers onto nodes that are themselves virtual machines provided by a cloud service. In this model you benefit from both isolation levels. VMs separate tenants or big environments, while containers package and isolate individual applications.

Understanding this relationship is important for the rest of the course. When you run a Docker container, you are not creating a VM, you are starting a process in an isolated user space that shares a kernel with the host or with an underlying VM.

Mental Model Summary

To keep the difference clear in your mind, think of VMs and containers in everyday terms.

A virtual machine is similar to renting another whole apartment inside a building, complete with its own walls, plumbing, and electrical system. It looks and behaves like a separate home. You can renovate it and change its infrastructure without bothering your neighbors, but you pay for the full space.

A container is similar to setting up a fully furnished room inside a shared apartment. Each room has its own furniture and decor, and the occupants cannot see into each other’s rooms. However, they all share the same walls, plumbing, and electricity. It is cheaper and faster to prepare a new room than a whole new apartment, but they all depend on the same underlying infrastructure.

Keeping this image in mind will help you choose the right tool as you move into the rest of the course, where you will work with Docker containers in practice and see how they compare to traditional VM based setups.
