Understanding Container Networking Models
Container networking is about how containers reach each other, the host, and the outside world. Different runtimes and orchestrators implement different models, but the core patterns are shared.
The main container networking models you’ll encounter:
- Bridge (NAT) networking
- Host networking
- None (isolated) networking
- Overlay networks
- Macvlan / ipvlan
- CNI plugin–based networking (Kubernetes and others)
Each has trade-offs in simplicity, performance, isolation, and routability.
Network Namespaces and Virtual Interfaces (Quick Context)
Containers usually live in their own network namespace. Each namespace has its own:
- Network interfaces
- Routing table
- ARP table
- iptables/nftables rules (netfilter state is scoped per network namespace)
To connect containers to the host and outside networks, Linux uses virtual Ethernet pairs (veth) and bridge interfaces:
- A veth pair acts like a virtual cable: packets entering one end appear on the other.
- A bridge (br0, docker0, etc.) behaves like a virtual switch.
You will see these when inspecting container networks with tools like ip addr, ip link, and brctl or bridge.
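To make this concrete, here is a minimal sketch of wiring a namespace to a bridge by hand (requires root); the namespace, interface, and address names (demo-ns, veth-host, br-demo, 10.10.0.0/24) are illustrative, not anything a runtime creates for you:
ip netns add demo-ns                                 # new, empty network namespace
ip link add veth-host type veth peer name veth-ns    # the two ends of one virtual cable
ip link set veth-ns netns demo-ns                    # move one end into the namespace
ip link add br-demo type bridge                      # the "virtual switch"
ip link set veth-host master br-demo                 # plug the host end into the bridge
ip link set br-demo up
ip link set veth-host up
ip netns exec demo-ns ip addr add 10.10.0.2/24 dev veth-ns
ip netns exec demo-ns ip link set veth-ns up
Container runtimes automate exactly this kind of plumbing for every container they start.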
Bridge / NAT Networking
Bridge networking is the default for many container engines (e.g., Docker’s bridge network).
How it works
- The runtime creates a Linux bridge on the host, e.g. docker0.
- The bridge is assigned a private subnet, e.g. 172.17.0.0/16.
- Each container gets:
- Its own network namespace
- A veth interface inside the container (e.g. eth0)
- The other end of the veth attached to the bridge in the host namespace
- The host performs NAT (masquerading) for containers to reach the outside world.
Typical outbound traffic flow:
$$
\text{container} \rightarrow \text{veth}_c \leftrightarrow \text{veth}_h \rightarrow \text{bridge (docker0)} \rightarrow \text{host interface (eth0)} \rightarrow \text{internet}
$$
The host uses iptables MASQUERADE rules so outbound packets have the host’s IP as source; return traffic is de-NATed and forwarded back.
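The rule Docker installs for the default bridge is typically equivalent to the following (exact chains and match options vary by engine and version, and the subnet is the example one from above):
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE    # rewrite the source of container traffic leaving via any interface other than docker0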
Advantages
- Simple and automatic.
- Containers can reach each other via container IPs on the bridge.
- Provides basic isolation (containers are not directly exposed to external networks).
- Works well on a single host.
Disadvantages
- Extra NAT hop adds some overhead.
- Container IPs are often not reachable directly from outside the host without port mapping.
- Less transparent routing: external systems see host IP, not container IP.
Port publishing
To expose a container port externally, you map host ports:
docker run -p 8080:80 nginx
- Host :8080 → container :80
- Implemented with iptables DNAT rules from the host interface to the container IP/port, as shown below.
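The mapping shows up in the DOCKER chain of the NAT table; the container IP 172.17.0.2 below is illustrative:
iptables -t nat -L DOCKER -n
# DNAT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:8080 to:172.17.0.2:80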
Host Networking
With host networking, the container shares the host’s network namespace.
Example (Docker):
docker run --net=host nginx
Characteristics
- No separate container IP on a bridge; the container uses the host’s interfaces.
- Services inside the container bind directly to host IPs and ports.
- No container-level NAT or bridge involved.
Advantages
- Very low overhead; close to bare-metal network performance.
- No port mapping needed; just bind to normal host ports.
- Sometimes required for software that expects full control of networking.
Disadvantages
- Weaker isolation: container can see and interact with all host interfaces.
- Port conflicts with host services.
- Harder to separate traffic or apply container-specific firewall rules.
Use host networking when performance is critical and you accept weaker isolation (e.g. some monitoring agents, network appliances).
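A quick way to see the difference (busybox is used here simply because it ships an ip applet):
docker run --rm --net=host busybox ip addr    # prints the host's real interfaces
docker run --rm busybox ip addr               # default bridge: only lo and a veth-backed eth0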
None / Isolated Networking
--net=none (or equivalent) gives the container a network namespace but no configured interfaces (except loopback).
Characteristics:
- No default route, DNS, or external connectivity.
- Often used when:
- You want full manual control (e.g. custom ip and iptables setup).
- The container is wired into a separate SDN or specialized plugin.
Example:
docker run --net=none myimage
From there, you can use ip and nsenter or CNI plugins to attach custom interfaces.
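A minimal sketch of doing that by hand for a container started with --net=none; the container name, interface names, and addresses here are illustrative:
pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)    # PID anchoring the container's netns
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns "$pid"                       # move one end into the container
nsenter --target "$pid" --net ip addr add 10.200.0.2/24 dev veth-cont
nsenter --target "$pid" --net ip link set veth-cont up
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
After this, the host (10.200.0.1) and the container (10.200.0.2) can reach each other; routing and NAT beyond that remain entirely up to you.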
Overlay Networks
Overlay networks allow containers running on different hosts to communicate as if they were on the same L2/L3 network, abstracting away the physical network.
These are fundamental in multi-host environments (Docker Swarm, Kubernetes, and other orchestrators).
Concept
- Create a virtual network that spans multiple hosts.
- Each container gets an IP in an overlay subnet, e.g. 10.0.0.0/16.
- The overlay uses encapsulation (e.g. VXLAN: packets are wrapped in UDP) to transport traffic between hosts.
- The underlay network only needs IP connectivity between the hosts (no container-awareness).
Typical data path for container A → container B on another host:
- Container A sends to B’s overlay IP.
- Host A encapsulates packet into VXLAN/UDP.
- Packets travel across underlay network to Host B.
- Host B decapsulates and delivers packet to container B.
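You can usually watch this encapsulation on the underlay; 4789 is the IANA-assigned VXLAN UDP port, though some deployments configure a different one:
tcpdump -n -i eth0 udp port 4789    # VXLAN-encapsulated container traffic between hosts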
Advantages
- Transparent inter-host networking for containers.
- Isolates traffic using separate overlay networks.
- Works over existing infrastructure without special switch support.
Disadvantages
- Encapsulation overhead (extra headers, CPU cost).
- Additional complexity (control plane to maintain endpoint mappings).
- Debugging can be harder (more layers to inspect).
Overlay networks are heavily used in Docker Swarm (docker network create --driver overlay) and Kubernetes via CNI plugins that may use VXLAN or similar.
Macvlan and Ipvlan
Macvlan and ipvlan give containers IPs directly on the underlay network, making them appear as separate hosts at L2 or L3.
Macvlan basics
- Each container gets its own MAC address and IP on the physical network.
- A macvlan sub-interface is created on top of a physical NIC, e.g.:
ip link add link eth0 name macvlan0 type macvlan mode bridge
- Containers attach to macvlan0 and receive IPs from the same subnet as the host (via DHCP or static config).
Advantages:
- No NAT: containers are directly reachable by other devices on the LAN.
- Good for legacy systems that require peers with real IPs.
- Can simplify firewall rules (treat containers like distinct hosts).
Disadvantages:
- Some switches / NIC drivers may dislike many MACs on one port or require configuration.
- By default, host ↔ container communication via the same NIC can be restricted; workarounds involve an extra interface or routing tricks.
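With Docker, the same idea looks roughly like this; the subnet, gateway, and parent interface must match your actual LAN, and the values below are only placeholders:
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  lan-net
docker run -d --network lan-net --ip 192.168.1.50 nginx    # container appears as a peer on the LAN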
Ipvlan basics
- Multiple IP addresses share a single MAC address, reducing switch MAC table pressure.
- Similar goals to macvlan (container IPs in the underlay), but different at L2.
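Creation mirrors macvlan; mode l2 is shown here, and an l3 mode also exists:
ip link add link eth0 name ipvlan0 type ipvlan mode l2    # shares eth0's MAC, containers get their own IPs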
You’d choose macvlan/ipvlan when you:
- Need containers to be first-class citizens on the LAN.
- Want to avoid NAT or overlay encapsulation in small to medium deployments.
Docker Networking Overview (As a Practical Example)
Docker exposes several network drivers; these mostly build on Linux primitives described above.
Listing and inspecting networks
docker network ls
docker network inspect bridge
Common drivers:
- bridge: default single-host virtual network (NAT via iptables).
- host: share the host's network namespace.
- none: isolated; no network config.
- overlay: multi-host networks in Swarm mode.
- macvlan: attach containers directly to the physical network.
Creating custom bridge networks
Custom bridges give better isolation and control:
docker network create \
--driver bridge \
--subnet 172.20.0.0/16 \
mynet
docker run --network mynet --name app1 -d nginx
Containers on mynet:
- Use IPs from 172.20.0.0/16.
- Can resolve each other by container name using Docker's internal DNS (demonstrated below).
- Are isolated from the default bridge network unless explicit routing is configured.
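A quick check of the built-in name resolution (busybox is just a convenient test image):
docker run --rm --network mynet busybox ping -c 1 app1    # app1 resolves via Docker's embedded DNS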
Overlay networking in Docker Swarm
In Swarm mode:
docker network create --driver overlay --attachable myoverlay
docker service create --name web --network myoverlay nginx
- myoverlay spans all Swarm nodes.
- Each task (container) gets an IP in the overlay network.
- Swarm’s control plane maintains a distributed routing mesh.
Kubernetes and CNI Networking (Conceptual)
Kubernetes (and other systems) rely on the Container Network Interface (CNI) standard. Each node:
- Uses a CNI plugin to create and configure interfaces in the Pod’s network namespace.
- Ensures that every Pod IP is directly routable from every other Pod (the “flat” cluster network model).
Important conceptual points:
- No mandatory NAT for Pod-to-Pod traffic inside the cluster.
- CNI plugins implement the chosen data plane:
- Overlay (VXLAN, IP-in-IP, etc.): Flannel, Calico, Weave Net.
- Pure routed: Calico in BGP mode, Cilium, etc.
- Each Pod gets:
- An interface (e.g. eth0) in its namespace.
- An IP from a Pod CIDR assigned to the node.
As an operator, you typically:
- Choose a CNI plugin based on your environment and requirements.
- Understand how Pod IPs map to node IPs and routes.
- Use kubectl get pods -o wide and kubectl describe node to inspect IP assignments and connectivity.
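For example (the node name mynode is illustrative; the podCIDR field is populated on most, but not all, setups, since some CNIs manage addressing themselves):
kubectl get pods -o wide                                  # Pod IPs and the nodes they run on
kubectl get node mynode -o jsonpath='{.spec.podCIDR}'     # Pod CIDR assigned to that node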
DNS and Service Discovery in Container Environments
Networking is not only about packets; naming and discovery are critical.
Basic container DNS (single host)
- User-defined Docker bridge networks provide an embedded DNS server:
- Container names become hostnames (ping app1).
- DNS entries are updated as containers join/leave.
Service discovery in orchestrated systems
- Docker Swarm:
- Services get virtual IPs.
- Built-in load balancing across tasks.
- Kubernetes:
- Each Service gets:
- A stable cluster IP.
- A DNS name (service.namespace.svc.cluster.local).
- DNS (CoreDNS) resolves these names to Service IPs.
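A quick check from inside a Pod (the pod and service names are illustrative, and the image must include nslookup or dig):
kubectl exec -it mypod -- nslookup web.default.svc.cluster.local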
You will rely heavily on these features instead of manually tracking container IPs.
Security and Isolation Considerations
Container networking interacts with host firewalling, routing, and kernel features. Some key points:
- iptables / nftables:
- NAT rules for bridge networks.
- Filtering rules to control inbound/outbound container traffic.
- Network Policies (Kubernetes, via CNI):
- Control which Pods can talk to which, at L3/L4.
- Isolation boundaries:
- Network namespaces isolate routing/addresses.
- Capabilities and seccomp affect what network operations containers may perform.
- Exposed surfaces:
- Every published port (-p host:container) is an additional attack surface.
- Macvlan/ipvlan expose containers more like full hosts on the LAN.
Best practices:
- Only expose ports that need to be reachable externally.
- Use dedicated networks (custom bridges, overlays) for different application tiers.
- Use firewalls/network policies to restrict lateral movement between containers.
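As one concrete host-level control for Docker bridge networks, the DOCKER-USER chain is evaluated before Docker's own forwarding rules, so you can restrict what may reach published containers; the interface and subnet below are illustrative:
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/8 -j DROP    # only internal sources may reach published ports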
Debugging and Observing Container Networking
You’ll often need to inspect and debug connectivity issues.
Useful techniques:
On the host
- List namespaces:
ip netns list    # if your tooling uses named netns
- List bridges and links:
ip link show
ip addr show
bridge link
- Inspect iptables rules:
iptables -t nat -L -n -v
iptables -t filter -L -n -v
From inside containers
Install or use tools like iproute2, ping, traceroute, curl, dig.
Check:
- IP configuration: ip addr, ip route
- DNS resolution: cat /etc/resolv.conf, dig / nslookup
- Connectivity: ping, curl, nc
Using host tools to inspect containers
- Docker:
docker exec -it mycontainer sh
- Enter a network namespace directly (container engines usually expose the PID):
nsenter --target <pid> --net ip addr
Combine these with packet capture tools (tcpdump, wireshark) to see actual traffic on bridges, veths, or physical interfaces.
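For example, to watch one container's traffic on the default bridge or on its veth end (the interface names and address depend on your setup):
tcpdump -ni docker0 host 172.17.0.2         # traffic to/from one container IP on the bridge
tcpdump -ni vethab12cd3 -w /tmp/cap.pcap    # capture a specific veth end for later analysis in Wireshark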
Choosing a Container Networking Approach
Your choice depends on environment and needs:
- Single host, simple deployments:
- Bridge networking with port publishing is usually enough.
- Multi-host, orchestrated clusters:
- Overlay or routed CNI-based networking is typical (Kubernetes, Swarm).
- Need direct LAN presence for containers:
- Consider macvlan/ipvlan.
- Performance-critical, trust the container:
- Host networking may be appropriate.
Understanding the underlying Linux primitives (namespaces, veth, bridges, iptables) will make all higher-level systems (Docker, Kubernetes, LXC, Podman) clearer and easier to operate.