Why Cloud Networking Is Different
Traditional networking is about physical machines and static networks. In the cloud, most networking is:
- Software-defined (configured via APIs/console/CLI instead of cables)
- Highly dynamic (instances, IPs, and network paths change frequently)
- Tightly integrated with identity, policies, and automation
Across major providers (AWS, Azure, GCP), core ideas are similar even if names differ. This chapter focuses on those shared concepts and how they affect Linux systems running in the cloud.
Virtual Private Clouds / Virtual Networks
Each cloud gives you logically isolated networks for your resources:
- AWS: VPC (Virtual Private Cloud)
- Azure: VNet (Virtual Network)
- GCP: VPC Network
Key ideas:
- You get a private IP space using RFC 1918 ranges, e.g. `10.0.0.0/16`, `172.16.0.0/16`, `192.168.0.0/16`.
- The provider's fabric routes traffic inside this space; no physical routers required.
- You can have multiple VPCs/VNets and connect them selectively.
From a Linux instance’s perspective, this looks like a normal LAN:
- It has a private IP
- It uses a default gateway (managed by the cloud)
- It uses DHCP/DNS (often also cloud-managed)
You still use familiar tools (`ip`, `ss`, `ping`, etc.) to inspect and debug, but the topology and routing are controlled by cloud configuration instead of on-box config files alone.
Subnets and IP Addressing in the Cloud
Each VPC/VNet is split into subnets. Subnets define:
- IP ranges (e.g. `10.0.1.0/24`, `10.0.2.0/24`)
- Availability zone / region placement
- Whether instances can have direct internet access (public vs private)
Typical patterns:
- Public subnet
  - Instances can receive traffic from the internet via public IPs.
  - Usually hosts load balancers, bastion/jump hosts, or public-facing services.
- Private subnet
  - No direct inbound connectivity from the internet.
  - Used for application servers, databases, and internal services.
  - Outbound internet traffic often flows via NAT.
Linux implications:
- Your instance will see one or more interfaces (`eth0`, `ens5`, etc.) with private IPs from the subnet range.
- Additional private IPs can be attached to the same interface (for multi-service or multi-tenant setups).
- You rarely edit `/etc/network/interfaces` or `NetworkManager` for cloud-provided interfaces; the cloud or its agent usually manages them.
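If a secondary private IP is attached in the cloud control plane but no agent configures it on the instance, it may need to be added by hand. A minimal sketch, assuming `eth0` and example addresses:

```bash
# Add the cloud-assigned secondary IP on the instance itself
# (only needed when no cloud agent configures it automatically;
# 10.0.1.11/24 and eth0 are placeholders).
sudo ip addr add 10.0.1.11/24 dev eth0

# Verify both private IPs now appear on the interface.
ip -4 addr show dev eth0
```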
Public vs Private IPs and NAT
Cloud networking generally separates:
- Private IPs (internal-only, routable within VPC/VNet and connected networks)
- Public IPs (globally reachable addresses, often mapped to a private IP)
There are two common mappings:
- One-to-one public IP assignment
  - Instance has:
    - Private IP on its interface (e.g. `10.0.1.10`)
    - Public IP associated in the cloud control plane (e.g. `203.0.113.5`)
  - Cloud NAT translates public ↔ private, but the Linux system normally only sees the private IP.
- NAT gateway for many-to-one outbound
  - Private instances use a shared NAT gateway to access the internet.
  - Outbound connections appear to come from a single public IP or a small pool.
  - Inbound connections from the internet are not allowed directly.
Linux view:
- `ip a` will usually show only private addresses.
- `curl ifconfig.me` (or similar) shows the public IP used for outbound NAT.
- You don't configure NAT rules on the instance for basic connectivity; the cloud manages this at the network level.
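A quick way to see this split in practice; a sketch assuming the interface is `eth0` (any "what is my IP" service works in place of `ifconfig.me`):

```bash
# The private address the OS actually has:
ip -4 addr show dev eth0

# The public address the internet sees after cloud NAT:
curl -s ifconfig.me
```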
Route Tables and Default Gateways
Each subnet is associated with a route table describing where traffic goes:
- `10.0.0.0/16` → `local` (internal VPC routing)
- `0.0.0.0/0` → internet gateway or NAT gateway
- Specific CIDRs → VPN or peering connections
On your Linux instance:
- `ip route` (or `ip r`) shows a default route (e.g. `default via 10.0.1.1 dev eth0`), where `10.0.1.1` is a virtual gateway managed by the cloud.
- You rarely modify these default routes directly; instead, you change the VPC route table via the cloud console/CLI, and the instance's effective routing updates automatically.
This separation is important:
- OS routing table: Last hop from the instance to the cloud-provided gateway.
- Cloud route table: Where the cloud sends packets after the gateway (internet, other subnets, VPN, etc.).
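To look at the OS half in isolation, a sketch of inspecting the instance's routing (addresses are examples):

```bash
# The kernel routing table on a typical cloud instance is short:
ip route
# default via 10.0.1.1 dev eth0
# 10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.10

# Ask which route a specific destination would take:
ip route get 8.8.8.8
```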
Security Groups and Network ACLs
Cloud networking has two main layers of traffic control:
Security Groups / Network Security Groups
These are stateful, instance-level firewalls:
- Attached to network interfaces or instances.
- Rules are usually expressed as:
  - Allow inbound: protocol, port(s), source (CIDR or another security group)
  - Allow outbound: protocol, port(s), destination
- Stateful: if you allow outbound TCP to a destination, the return traffic is automatically allowed.
Providers call them:
- AWS: Security Groups (SGs)
- Azure: Network Security Groups (NSGs)
- GCP: VPC firewall rules (conceptually similar)
Linux relationship:
- They act like an external firewall in front of `iptables`/`nftables`.
- Your Linux firewall still works, but traffic can be blocked before it even reaches the instance.
- When debugging, always check both:
  - Cloud-level rules (SG/NSG)
  - Linux-level rules (`iptables -L`, `nft list ruleset`, etc.)
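As a sketch of checking both layers (the AWS CLI command assumes configured credentials, and the security group ID is a placeholder):

```bash
# Linux layer: rules that could drop traffic on the instance itself.
sudo nft list ruleset         # nftables (most modern distros)
sudo iptables -L -n -v        # iptables view, with packet counters

# Cloud layer: the security group attached to the instance
# (AWS CLI example; the group ID is a placeholder).
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
```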
Network ACLs (NACLs) and Subnet-level Rules
Many clouds also provide stateless ACLs at the subnet level:
- Rules apply to all instances in a subnet.
- Stateless: you must explicitly allow both inbound and outbound directions.
- Commonly used as an extra safety layer (like a coarse-grained firewall).
For most basic setups, security groups/NSGs are the primary tool; NACLs are optional.
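When a subnet-level ACL is suspected, it can only be inspected cloud-side; an AWS CLI sketch (the VPC ID is a placeholder):

```bash
# List the NACLs of a VPC to rule out subnet-level blocks.
# NACLs are stateless, so check both inbound and outbound entries.
aws ec2 describe-network-acls --filters Name=vpc-id,Values=vpc-0123456789abcdef0
```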
Load Balancers
Cloud load balancers are managed services that distribute traffic across multiple instances:
Common types:
- Layer 4 (TCP/UDP) load balancers
  - Route based on `IP:port`.
  - Provide basic health checks and distribution.
- Layer 7 (HTTP/HTTPS) load balancers
  - Understand HTTP(S).
  - Can terminate TLS, route by host/path, add/remove headers, etc.
From a Linux server’s perspective:
- The load balancer appears as the client IP (or may pass the original IP in headers like `X-Forwarded-For`).
- Health checks (e.g. an HTTP GET to `/health`, or a TCP connect on a port) must succeed; otherwise instances are removed from rotation.
- You do not install load balancer software for these managed services; you just configure targets and health checks in the cloud.
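A useful habit is to run the health check yourself from the instance; a sketch assuming an HTTP service on port 8080 with a `/health` path (both are placeholders for whatever the target group actually probes):

```bash
# Reproduce the load balancer's health check locally:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/health
# 200 -> the instance should stay in rotation;
# 000 or 5xx -> it will be marked unhealthy and removed.
```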
For networking:
- Instances often stay in private subnets; the load balancer has a public IP.
- Security groups/NSGs typically allow inbound only from the load balancer on service ports.
Private Connectivity: VPN, Direct Links, and Peering
Cloud networks rarely live in isolation. Common connectivity options:
Site-to-Site VPN
- Encrypted tunnel between your on-premises network and the cloud VPC/VNet.
- From Linux's perspective:
  - Remote on-prem networks appear as normal routes in the VPC route table.
  - Instances talk to on-prem IPs using private addressing; the cloud VPN gateway handles encryption and tunneling.
Direct/Dedicated Links
- Physical or virtual dedicated connections between your datacenter and the cloud provider’s network.
- Lower latency, more stable bandwidth than internet VPN.
- Often combined with VPN for encryption.
VPC/VNet Peering
- Connects two VPCs/VNets (same or different accounts/projects).
- Traffic is routed privately over the provider’s backbone.
- No NAT required; IP ranges must not overlap.
For Linux admins:
- Connectivity issues often come from missing routes or misconfigured security groups rather than per-host problems.
- Use tools like `traceroute`, `mtr`, and `ping`, plus cloud route/peering inspection, to troubleshoot.
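For example, when an on-prem or peered address is unreachable, a path trace shows where packets stop; a sketch with a placeholder destination IP:

```bash
# Summarize the path over 10 probes; the last responding hop hints at
# whether the break is local, inside the VPC, or past the VPN/peering edge.
mtr --report --report-cycles 10 10.8.0.25

# Fallback if mtr is not installed:
traceroute -n 10.8.0.25
```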
DNS in the Cloud
Each VPC/VNet usually has internal DNS:
- Instances automatically receive:
  - An internal DNS server IP (through DHCP)
  - A search domain (e.g. `ec2.internal`, `internal.cloudapp.net`, `c.project.internal`)
Key behaviors:
- Hostnames that your Linux instance sees in `/etc/hostname` or `hostname` output often map to internal DNS entries.
- Internal DNS names are stable even if public IPs change (for certain instance types or load balancers).
- Managed DNS services (e.g. Route 53, Azure DNS, Cloud DNS) provide global records that can point to:
  - Public load balancers
  - Private IPs inside VPCs
  - Hybrid on-prem + cloud names
Linux integration:
- Resolver configuration is usually managed automatically (`/etc/resolv.conf` may be managed by `systemd-resolved`, `NetworkManager`, or a cloud agent).
- Applications can use hostnames instead of hardcoding IPs, making them resilient to changes in the underlying infrastructure.
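To confirm which resolver is in use and that internal names resolve, a sketch (the hostname follows AWS's internal naming and is only an example):

```bash
# Which DNS server and search domain did the instance pick up?
resolvectl status             # on systemd-resolved systems
cat /etc/resolv.conf          # fallback on other setups

# Test an internal name end to end:
dig +short ip-10-0-1-10.ec2.internal
```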
Basic Connectivity Checks on a Cloud Linux Instance
When something doesn’t work, combine standard Linux tools with cloud concepts:
- Check interface and IP
  - `ip a`: is there an IP from the expected subnet?
- Check default route
  - `ip route`: is there a default route via the VPC gateway?
- Check DNS
  - `cat /etc/resolv.conf` or `resolvectl status`
  - `dig example.com` or `nslookup example.com`
- Check local firewall
  - `sudo iptables -L -n` or `sudo nft list ruleset`
- Check cloud-side rules
  - Security groups/NSGs
  - NACLs (if used)
  - Load balancer target status / health checks
Always remember: a connection can be blocked at multiple layers (instance firewall, cloud firewall, routing, load balancer, NAT).
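Collected into one pass, a minimal triage script under the assumptions above (`example.com` and `ifconfig.me` are placeholders):

```bash
#!/usr/bin/env bash
# First-pass connectivity triage for a cloud Linux instance.
echo "== Interfaces ==";   ip -4 a
echo "== Routes ==";       ip route
echo "== Resolver ==";     resolvectl status 2>/dev/null || cat /etc/resolv.conf
echo "== DNS lookup ==";   dig +short example.com
echo "== Local firewall =="
sudo nft list ruleset 2>/dev/null || sudo iptables -L -n
echo "== Outbound internet (public IP via NAT) =="
curl -s --max-time 5 ifconfig.me; echo
```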
Common Cloud Networking Patterns
Some standard architectures you’ll see repeatedly:
- Public web tier, private app/db tier
  - Web servers behind a public load balancer in public subnets.
  - App and DB instances in private subnets, accessible only from web/app tiers.
  - Outbound internet for private instances via NAT.
- Bastion host (jump box)
  - Single hardened SSH host in a public subnet.
  - All other instances in private subnets only allow SSH from the bastion's security group.
  - Used to administer private resources without opening many public SSH endpoints.
- Hub-and-spoke
  - Central "hub" VPC/VNet with shared services (VPN, inspection, logging).
  - Multiple "spoke" VPCs/VNets for different applications or teams, peered to the hub.
  - Route tables direct traffic between spokes via the hub.
When working with Linux in these designs, you mostly:
- Ensure services bind to the right IPs/interfaces.
- Open the correct ports locally.
- Coordinate with cloud networking settings (subnets, routes, SGs/NSGs, load balancers).
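The binding point is the most common Linux-side mistake in these designs; a quick check, with port 8080 as a placeholder:

```bash
# A service bound only to 127.0.0.1 is invisible to load balancers and peers.
ss -tlnp | grep ':8080'
# Reachable from the VPC:  LISTEN ... 0.0.0.0:8080
# Localhost only:          LISTEN ... 127.0.0.1:8080
```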
How This Ties into DevOps and Automation
Cloud networking is fully API-driven. For DevOps workflows:
- You define VPCs, subnets, route tables, security groups, and load balancers as code (e.g. Terraform, CloudFormation).
- CI/CD pipelines can:
  - Create temporary networks for tests (see the sketch after this list).
  - Deploy Linux-based services into specific subnets with appropriate security rules.
- Configuration management (Ansible, etc.) handles on-instance firewall rules and service ports to match cloud networking policies.
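As a sketch of the temporary-network step (assuming Terraform configuration for the VPC lives in `./network`; the variable name and `BUILD_ID` are placeholders):

```bash
# Spin up a disposable network for integration tests, then tear it down.
terraform -chdir=./network init
terraform -chdir=./network apply -auto-approve -var "env_name=ci-${BUILD_ID}"

# ... deploy instances into the temporary subnets and run tests ...

terraform -chdir=./network destroy -auto-approve -var "env_name=ci-${BUILD_ID}"
```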
Understanding the basics above lets you:
- Reason about how your Linux instances connect to the internet and each other.
- Debug failures that are not caused by the OS itself, but by cloud networking configuration.
- Safely expose or isolate services according to security and reliability requirements.