
4.7.5 Container networking

Understanding Container Networking Models

Container networking is about how containers reach each other, the host, and the outside world. Different runtimes and orchestrators implement different models, but the core patterns are shared.

The main container networking models you’ll encounter:

  • Bridge / NAT networking (private subnet behind the host)
  • Host networking (sharing the host’s namespace)
  • None / isolated networking (loopback only)
  • Overlay networks (multi-host virtual networks)
  • Macvlan / ipvlan (direct attachment to the underlay)

Each has trade-offs in simplicity, performance, isolation, and routability.

Network Namespaces and Virtual Interfaces (Quick Context)

Containers usually live in their own network namespace. Each namespace has its own:

  • Network interfaces (including its own loopback)
  • Routing tables
  • Firewall rules (iptables/nftables)
  • Sockets and view of /proc/net

To connect containers to the host and outside networks, Linux uses virtual Ethernet pairs (veth) and bridge interfaces:

  • A veth pair acts like a virtual cable: frames sent into one end come out the other. One end sits inside the container’s namespace, the other in the host’s.
  • A bridge is a virtual L2 switch in the host namespace; the host-side veth ends are attached to it as ports.

You will see these when inspecting container networks with tools like ip addr, ip link, and bridge (or the legacy brctl).
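
To make the plumbing concrete, here is a minimal hand-wired sketch of what runtimes automate (the names demo, veth-host, veth-ctr, br-demo and the 10.0.0.0/24 addressing are arbitrary illustrations; the uplink and NAT parts are omitted):

  # Create a namespace and a veth pair (one end per namespace)
  ip netns add demo
  ip link add veth-host type veth peer name veth-ctr
  ip link set veth-ctr netns demo

  # Create a bridge and attach the host-side end as a port
  ip link add br-demo type bridge
  ip link set br-demo up
  ip link set veth-host master br-demo
  ip link set veth-host up

  # Configure the container-side end inside the namespace
  ip netns exec demo ip addr add 10.0.0.2/24 dev veth-ctr
  ip netns exec demo ip link set veth-ctr up
  ip netns exec demo ip link set lo up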

Bridge / NAT Networking

Bridge networking is the default for many container engines (e.g., Docker’s bridge network).

How it works

  1. The runtime creates a Linux bridge on the host, e.g. docker0.
  2. The bridge is assigned a private subnet, e.g. 172.17.0.0/16.
  3. Each container gets:
    • Its own network namespace
    • A veth interface inside the container (e.g. eth0)
    • The other end of the veth attached to the bridge in the host namespace
  4. The host performs NAT (masquerading) for containers to reach the outside world.

Typical outbound traffic flow:

$$
\text{container} \rightarrow \text{veth}_c \leftrightarrow \text{veth}_h \rightarrow \text{bridge (docker0)} \rightarrow \text{host interface (eth0)} \rightarrow \text{internet}
$$

The host uses iptables MASQUERADE rules so outbound packets have the host’s IP as source; return traffic is de-NATed and forwarded back.
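
For Docker’s default bridge, the NAT rule looks roughly like this (subnet and interface name depend on your configuration):

  # Masquerade traffic from the container subnet leaving via any non-bridge interface
  iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE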

Advantages

  • Simple default that works without touching the physical network
  • Containers live in a private subnet; exposure is opt-in via port publishing
  • Many containers can share a single host IP

Disadvantages

  • NAT adds overhead and complicates address-sensitive protocols (e.g. FTP, SIP)
  • Containers are not directly reachable from outside without published ports
  • Published ports can collide on the host

Port publishing

To expose a container port externally, you map a host port to a container port:

docker run -p 8080:80 nginx
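
If you start it detached and named (docker run -d --name web -p 8080:80 nginx), you can verify the mapping from the host:

docker port web              # prints: 80/tcp -> 0.0.0.0:8080
curl http://localhost:8080   # reaches nginx through the DNAT rule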

Host Networking

With host networking, the container shares the host’s network namespace.

Example (Docker):

docker run --net=host nginx

Characteristics

  • No separate network namespace: the container sees the host’s interfaces, IPs, routes, and ports
  • Services bind directly to host ports; port publishing (-p) has no effect
  • Port conflicts with host processes (and other host-networked containers) are possible

Advantages

  • Native performance: no veth, bridge, or NAT hops
  • Straightforward access to host-local services and to broadcast/multicast traffic

Disadvantages

  • No network isolation between container and host
  • Only one process can bind a given port on the host
  • A compromised container can interact with the host network stack directly

Use host networking when performance is critical and you accept weaker isolation (e.g. some monitoring agents, network appliances).
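
A quick sanity check that no separate namespace or NAT is involved; nginx binds straight to the host’s port 80:

docker run -d --net=host nginx
ss -ltnp | grep ':80 '   # the listener appears on the host itself; no docker-proxy, no DNAT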

None / Isolated Networking

--net=none (or equivalent) gives the container a network namespace but no configured interfaces (except loopback).

Characteristics:

  • Only loopback is present; the container has no external connectivity
  • Strongest network isolation of the standard modes
  • A blank slate for custom, out-of-band network setup

Example:

docker run --net=none myimage

From there, you can use ip and nsenter or CNI plugins to attach custom interfaces.
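
As a sketch of such manual attachment, using the container’s PID to reach its namespace (mycontainer, the interface names, and the addressing are illustrative; providing an uplink for veth-host is still up to you):

pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
ip link add veth-host type veth peer name veth-ctr
ip link set veth-ctr netns "$pid"                     # move one end into the container
nsenter --target "$pid" --net ip addr add 10.0.0.2/24 dev veth-ctr
nsenter --target "$pid" --net ip link set veth-ctr up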

Overlay Networks

Overlay networks allow containers running on different hosts to communicate as if they were on the same L2/L3 network, abstracting away the physical network.

These are fundamental in multi-host environments (Docker Swarm, Kubernetes, and other orchestrators).

Concept

Each participating host runs a driver or agent that stitches the hosts together into one virtual network. Traffic between containers on different hosts is encapsulated (commonly VXLAN over UDP) and carried across the underlay, i.e. the real physical network.

Typical data path for container A → container B on another host:

  1. Container A sends to B’s overlay IP.
  2. Host A encapsulates packet into VXLAN/UDP.
  3. Packets travel across the underlay network to Host B.
  4. Host B decapsulates and delivers packet to container B.
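
On the wire you can watch the encapsulated traffic on the underlay interface; VXLAN conventionally uses UDP port 4789, though the port is configurable:

  tcpdump -ni eth0 udp port 4789   # container-to-container packets ride inside these UDP frames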

Advantages

  • Multi-host container connectivity without reconfiguring the physical network
  • Containers keep stable overlay IPs regardless of which host they run on
  • Often integrates with service discovery and optional encryption

Disadvantages

  • Encapsulation overhead: extra headers, reduced effective MTU, some CPU cost
  • More moving parts (control plane, key/value store) and harder debugging

Overlay networks are heavily used in Docker Swarm (docker network create --driver overlay) and Kubernetes via CNI plugins that may use VXLAN or similar.

Macvlan and Ipvlan

Macvlan and ipvlan give containers IPs directly on the underlay network, making them appear as separate hosts at L2 or L3.

Macvlan basics

Macvlan creates virtual sub-interfaces on top of a parent NIC, each with its own MAC address, so every container appears as a distinct machine on the LAN:

ip link add link eth0 name macvlan0 type macvlan mode bridge
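
Container engines wrap this same primitive. A Docker macvlan network might be created like this (subnet, gateway, and parent interface are examples and must match your LAN):

docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  macnet
docker run -d --network macnet --ip 192.168.1.50 nginx   # reachable from the LAN at this IP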

Advantages:

  • Containers get real LAN IPs and are directly reachable without NAT or port publishing
  • Near-native performance (no bridge or NAT in the path)

Disadvantages:

  • Needs cooperation from the physical network; switch port security or MAC limits can break it
  • The host cannot reach its own macvlan containers via the parent interface by default (a host-side macvlan shim is the common workaround)
  • Usually unusable on Wi-Fi and in clouds that filter MAC/IP pairs

Ipvlan basics

Ipvlan is similar, but all sub-interfaces share the parent’s MAC address and differ only by IP, operating in L2 or L3 mode. That sidesteps MAC-based restrictions (Wi-Fi, many cloud networks) at the cost of some L2 behavior.
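
The link creation mirrors macvlan; l2 mode is shown here, l3 is the alternative:

ip link add link eth0 name ipvlan0 type ipvlan mode l2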

You’d choose macvlan/ipvlan when you:

  • Want containers to be first-class citizens on the physical network
  • Need inbound reachability without NAT or port mapping
  • Control the underlay and can allocate an IP per container

Docker Networking Overview (As a Practical Example)

Docker exposes several network drivers; they mostly build on the Linux primitives described above.

Listing and inspecting networks

docker network ls
docker network inspect bridge

Common drivers:

  • bridge: single-host NAT networking (the default)
  • host: share the host’s network namespace
  • none: loopback only
  • overlay: multi-host networking in Swarm mode
  • macvlan / ipvlan: attach containers directly to the underlay

Creating custom bridge networks

Custom bridges give better isolation and control:

docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  mynet
docker run --network mynet --name app1 -d nginx

Containers on mynet:

  • Can resolve each other by container name via Docker’s embedded DNS
  • Are isolated from containers on the default bridge
  • Get addresses from the 172.20.0.0/16 subnet defined above
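
A quick check of name resolution and reachability between them (assumes the busybox image is available):

docker run --rm --network mynet busybox ping -c 1 app1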

Overlay networking in Docker Swarm

In Swarm mode:

  • Overlay networks span every node in the swarm (requires an initialized swarm, docker swarm init)
  • Tasks of a service can reach each other across nodes over the overlay
  • --attachable additionally lets standalone containers join the network

docker network create --driver overlay --attachable myoverlay
docker service create --name web --network myoverlay nginx

Kubernetes and CNI Networking (Conceptual)

Kubernetes (and other systems) rely on the Container Network Interface (CNI) standard. Each node:

  • Runs a kubelet that invokes CNI plugins whenever a pod is created or deleted
  • Keeps plugin binaries (conventionally /opt/cni/bin) and network configs (/etc/cni/net.d)
  • Hands out pod IPs from a node- or cluster-level CIDR

Important conceptual points:

  • Every pod gets its own IP; containers within a pod share one network namespace
  • The Kubernetes model requires pod-to-pod connectivity across nodes without NAT
  • How that is achieved (VXLAN overlay, BGP routing, cloud route tables) is the plugin’s business
  • Services layer stable virtual IPs and load balancing on top, via kube-proxy or eBPF

As an operator, you typically:

  • Pick and deploy a CNI plugin (e.g. Calico, Flannel, Cilium)
  • Choose pod and service CIDRs that don’t overlap existing networks
  • Use NetworkPolicies (where supported) to restrict pod traffic
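
On a typical node you can inspect the conventional CNI locations (paths vary by distribution and installer):

ls /opt/cni/bin/     # plugin binaries: bridge, host-local, loopback, ...
ls /etc/cni/net.d/   # active network configuration files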

DNS and Service Discovery in Container Environments

Networking is not only about packets; naming and discovery are critical.

Basic container DNS (single host)

On user-defined Docker networks, an embedded DNS server (reachable at 127.0.0.11 inside each container) resolves container names and network aliases. The default bridge network offers no name resolution apart from the legacy --link mechanism.
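
You can observe this from a container on a user-defined network (app1 from the earlier example; getent assumes a glibc-based image):

docker exec app1 cat /etc/resolv.conf   # shows nameserver 127.0.0.11
docker exec app1 getent hosts app1      # resolved by the embedded DNS server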

Service discovery in orchestrated systems

  • Docker Swarm: services are resolvable by name; a virtual IP load balances across the service’s tasks
  • Kubernetes: cluster DNS (CoreDNS) names Services, e.g. web.default.svc.cluster.local, and Service VIPs are routed to healthy pods

You will rely heavily on these features instead of manually tracking container IPs.

Security and Isolation Considerations

Container networking interacts with host firewalling, routing, and kernel features. Some key points:

  • Containers on the same bridge network can reach each other unrestricted by default
  • Docker manages its own iptables chains (DOCKER, DOCKER-USER); published ports may bypass rules you add elsewhere
  • Host networking, --privileged, and NET_ADMIN weaken or remove isolation
  • IP forwarding must be enabled on the host (net.ipv4.ip_forward=1), which itself has security implications

Best practices:

  • Use separate user-defined networks per application stack
  • Publish only the ports you need, and bind them to specific addresses where possible
  • Put custom restrictions in the DOCKER-USER chain, which Docker leaves for administrators
  • In Kubernetes, use NetworkPolicies for pod-level segmentation
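
For instance, binding a published port to loopback keeps it reachable only from the host itself:

docker run -d -p 127.0.0.1:8080:80 nginx   # not reachable from other machines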

Debugging and Observing Container Networking

You’ll often need to inspect and debug connectivity issues.

Useful techniques, from both sides of the namespace boundary:

On the host

  ip netns list        # if your tooling uses named netns
  ip link show
  ip addr show
  bridge link
  iptables -t nat -L -n -v
  iptables -t filter -L -n -v

From inside containers

Install or use tools like iproute2, ping, traceroute, curl, dig.

Check:

  • Interface configuration and routes (ip addr, ip route)
  • DNS settings (/etc/resolv.conf) and name resolution
  • Reachability of the gateway, other containers, and the outside world (ping, curl, traceroute)
  • Listening sockets (ss -ltn)

Using host tools to inspect containers

  docker exec -it mycontainer sh
  nsenter --target <pid> --net ip addr
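
To fill in <pid>, ask the runtime; you can then run any host tool inside the container’s network namespace:

  pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
  nsenter --target "$pid" --net ip addr            # same as above, with a concrete PID
  nsenter --target "$pid" --net tcpdump -ni eth0   # host’s tcpdump, container’s interfaces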

Combine these with packet capture tools (tcpdump, Wireshark) to see the actual traffic on bridges, veths, or physical interfaces.

Choosing a Container Networking Approach

Your choice depends on environment and needs:

  • Single host, simple services: default or custom bridge networks
  • Maximum performance or host-level tooling: host networking
  • Full isolation or fully custom wiring: none, plus manual setup
  • Multi-host clusters: overlay networks or a Kubernetes CNI plugin
  • Containers as peers on the LAN: macvlan or ipvlan

Understanding the underlying Linux primitives (namespaces, veth, bridges, iptables) will make all higher-level systems (Docker, Kubernetes, LXC, Podman) clearer and easier to operate.
