Understanding Services in OpenShift
In OpenShift, Services provide stable, virtual endpoints for accessing groups of pods. Pods are ephemeral and their IPs change; Services give you a consistent name and IP to reach them, and form the basis of in-cluster service discovery.
At a high level, a Service:
- Selects a set of pods using labels.
- Exposes those pods on a stable virtual IP (ClusterIP) and port.
- Optionally exposes them outside the cluster (depending on `type`).
- Participates in DNS-based service discovery.
OpenShift builds on standard Kubernetes Services; everything here applies to Kubernetes, with a few OpenShift-specific details and defaults.
Core Service Types
ClusterIP (default)
- Accessible only from inside the cluster.
- Gets a virtual IP (cluster-internal) and DNS entry.
- This is the default `type` when you create a Service.
Example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
    - port: 80         # Service port
      targetPort: 8080 # Pod containerPort
      protocol: TCP
  type: ClusterIP
```

Key points:
- `selector` chooses which pods are behind the Service.
- `port` is what clients use to connect to the Service.
- `targetPort` is where traffic is forwarded on the pod.
- If `targetPort` is omitted, it defaults to the same value as `port`.
NodePort
- Exposes the Service on a port on every node’s network interface.
- Usually used as a building block for higher-level load balancers.
- In OpenShift, external access is more commonly done via Routes, but NodePort is still available.
Basic example:
```yaml
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```

- Clients can access the Service using `NODE_IP:nodePort`.
- The NodePort range is controlled at the cluster level (commonly 30000–32767).
LoadBalancer
- Asks the underlying infrastructure (cloud provider, external integration) to create an external load balancer.
- In many OpenShift deployments, external access is instead handled via Routes and Ingress, but `type: LoadBalancer` is still supported where the infrastructure allows it.
```yaml
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
```

The Service gets an external IP or hostname from the infrastructure, and traffic is distributed to the backing pods.
ExternalName
- A special Service that maps a name inside the cluster to an external DNS name.
- No cluster IP or proxying; it’s purely DNS-level redirection.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```
Inside the namespace, accessing `external-db` is equivalent to using `db.example.com`.
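As an illustration, an application can keep referring to the in-cluster name even though the real database lives outside the cluster. The container name, image, and environment variable names below are hypothetical:

```yaml
# Hypothetical pod template fragment: the app targets `external-db`,
# which the ExternalName Service resolves to db.example.com in DNS.
containers:
  - name: app
    image: myapp:latest
    env:
      - name: DATABASE_HOST
        value: external-db   # resolves via DNS to db.example.com
      - name: DATABASE_PORT
        value: "5432"
```

Swapping the external database later only requires updating `externalName`, not the application configuration.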
Service Selectors and Endpoints
Services select pods using spec.selector. This label-based selection is how OpenShift decouples application identity from specific pod instances.
Example:
```yaml
spec:
  selector:
    app: payments
    role: api
```
Any pod with labels `app=payments` and `role=api` becomes a backend for this Service.
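For instance, a pod that should sit behind this Service only needs to carry the matching labels; the pod name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api-1   # placeholder name
  labels:
    app: payments        # matches the Service selector
    role: api
spec:
  containers:
    - name: api
      image: payments-api:latest   # placeholder image
```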
When a Service has a selector:
- The cluster automatically creates an `Endpoints` (or `EndpointSlice`) resource.
- It lists the pod IPs and ports that are currently backing the Service.
- As pods are added/removed, this list updates dynamically.
You can inspect endpoints:
```shell
oc get endpoints frontend
```
If you omit `selector`, you can manually define endpoints. This is less common but useful for services that point to:
- External IPs not directly managed by Kubernetes.
- Specialized backends where you explicitly control the endpoint list.
Example Service without selector:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-backend
spec:
  ports:
    - port: 5432
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-backend
subsets:
  - addresses:
      - ip: 10.0.0.10
      - ip: 10.0.0.11
    ports:
      - port: 5432
```

Service Ports and Target Ports
The port mapping in a Service is important for clean separation between client-facing and container-facing ports.
Fields:
- `port`: The port on which the Service is exposed.
- `targetPort`: The port on the pod/container.
- `name`: Optional but strongly recommended when multiple ports exist.
- `protocol`: Typically `TCP` or `UDP` (most OpenShift traffic is TCP).

Named `targetPort`:
You can use container port names instead of numbers, which is useful when containers expose multiple ports:
```yaml
containers:
  - name: web
    image: myimage
    ports:
      - name: http
        containerPort: 8080
      - name: metrics
        containerPort: 9090
```

```yaml
kind: Service
spec:
  selector:
    app: myapp
  ports:
    - name: web
      port: 80
      targetPort: http
    - name: metrics
      port: 9100
      targetPort: metrics
```

Using names makes your Service configuration independent of the exact port numbers inside the containers.
Service Discovery in OpenShift
Service discovery is how applications locate and communicate with other services inside the cluster. In OpenShift, this is primarily handled via cluster DNS.
DNS Names for Services
OpenShift deploys a DNS server that automatically creates DNS entries for Services and pods.
A typical Service DNS name within a namespace looks like:
- Short name: `service-name`
- Fully qualified within the cluster: `service-name.namespace.svc` or `service-name.namespace.svc.cluster.local` (the cluster domain is configurable)
Examples:
- Service `db` in project `payments` can be reached as:
  - `db` (within the `payments` namespace)
  - `db.payments.svc`
  - `db.payments.svc.cluster.local`
Applications usually only need the short name if they communicate within the same namespace.
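A common pattern is to carry the Service DNS name in configuration instead of hard-coding IPs. Here is a sketch using the `db` in `payments` example above; the ConfigMap name and keys are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config   # hypothetical name
data:
  DATABASE_HOST: db.payments.svc.cluster.local  # stable DNS name, not a pod IP
  DATABASE_PORT: "5432"
```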
Service Discovery via Environment Variables
When a pod starts, Kubernetes injects environment variables for existing Services in its namespace. For a Service `frontend` on port 80, you might see:
- `FRONTEND_SERVICE_HOST` — the Service cluster IP.
- `FRONTEND_SERVICE_PORT` — the Service port.
- `FRONTEND_PORT`, `FRONTEND_PORT_80_TCP`, and related variables.
This mechanism:
- Only captures Services that exist when the pod is created.
- Does not update if new Services appear later.
- Is legacy but still works and can be useful for simple apps.
DNS-based discovery is preferred because it is dynamic and does not require pod restarts when Services change.
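Where the environment-variable mechanism is used anyway, an entrypoint script typically just assembles a URL from the injected values. A minimal sketch; the fallback IP and port are hypothetical placeholders, not real cluster values:

```shell
# Inside a pod, FRONTEND_SERVICE_HOST/FRONTEND_SERVICE_PORT are injected
# by the kubelet. The defaults below are illustrative placeholders so the
# sketch also runs outside a cluster.
FRONTEND_SERVICE_HOST="${FRONTEND_SERVICE_HOST:-10.217.4.15}"
FRONTEND_SERVICE_PORT="${FRONTEND_SERVICE_PORT:-80}"
BACKEND_URL="http://${FRONTEND_SERVICE_HOST}:${FRONTEND_SERVICE_PORT}"
echo "$BACKEND_URL"
```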
Discovering Services from an Application
Common patterns from inside a pod:
- Using hostnames:
  - Same namespace: `http://backend:8080`
  - Different namespace: `http://backend.api-namespace.svc:8080`
- Using environment variables (if appropriate) to configure endpoints:
  - `DB_HOST=$DB_SERVICE_HOST`
  - `DB_PORT=$DB_SERVICE_PORT`
Languages and frameworks often have built-in DNS resolution, so using Service names directly is typically straightforward.
Headless Services and Stateful Service Discovery
A headless Service is a special type of Service that does not get a cluster IP. Instead, DNS returns the IPs of the individual pods directly.
Configuration:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 5432
```

Behavior:
- DNS queries for `db-headless` return a list of pod IPs.
- With StatefulSets, each pod gets its own DNS name, such as:
  - `pod-0.db-headless.namespace.svc.cluster.local`
  - `pod-1.db-headless.namespace.svc.cluster.local`
This is crucial for applications that require:
- Direct addressability of each replica.
- Stable identity per pod.
- Custom client-side load balancing or leader election.
You will encounter headless Services mainly when working with stateful or clustered applications.
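As a sketch of where headless Services typically appear, a StatefulSet references the headless Service via `serviceName`, which is what produces the per-pod DNS names. The labels and image here are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless   # must match the headless Service's name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db              # matches the headless Service selector
    spec:
      containers:
        - name: db
          image: postgres:15   # assumed image
          ports:
            - containerPort: 5432
```

The resulting pods `db-0` and `db-1` are then individually addressable as `db-0.db-headless.<namespace>.svc.cluster.local`, and so on.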
Service Traffic Routing and Load Balancing (Conceptual Level)
Without going into the networking implementation details:
- When a client connects to a Service IP and port, traffic is distributed across matching pods.
- The distribution algorithm (e.g., round-robin) is handled by the cluster’s networking layer.
- OpenShift’s SDN/OVN-Kubernetes implementation ensures that:
- Pods can reach Service IPs.
- Nodes forward external traffic to Services/pods as configured.
Key behaviors to understand:
- If no pods match the Service’s selector, connections typically fail or hang, depending on the protocol.
- If pods are not ready (based on readiness probes), they should not receive traffic.
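Readiness is declared per container; only pods whose probes pass are kept in the Service's endpoint list. A sketch of an HTTP readiness probe, where the path and timings are assumptions:

```yaml
containers:
  - name: web
    image: myapp:latest   # placeholder image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz    # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```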
For HTTP-based apps, you often combine:
- Services (for stable pod access).
- Routes or Ingress (for external HTTP(S) access — covered elsewhere).
Services and Namespaces
Services are namespaced resources:
- A Service name must be unique within a namespace.
- Different namespaces can have Services with the same name.
Cross-namespace access uses fully qualified names:
- `http://service-name.other-namespace.svc`
- Network policies (discussed elsewhere) may allow or restrict this.
Best practices:
- Keep logically related microservices in the same namespace when possible.
- Use explicit FQDNs when calling across namespaces to avoid ambiguity.
Practical Workflow with Services in OpenShift
Typical steps to expose an internal application:
- Deploy an application (e.g., Deployment, DeploymentConfig).
- Ensure pods are labeled appropriately (e.g., `app=myapp`).
- Create a Service:

  ```shell
  oc expose deployment myapp --port=80 --target-port=8080
  ```

  This automatically creates a Service with a selector matching the deployment’s pods.
- Verify:

  ```shell
  oc get svc
  oc describe svc myapp
  ```

- Connect from another pod:
  - Use `http://myapp:80` (same namespace).
  - Optionally, fully qualify the name if needed.
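For reference, the `oc expose` command above produces a Service roughly equivalent to this sketch; the selector comes from the deployment's pod labels, so `app: myapp` is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp         # assumed; derived from the deployment's pod template labels
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```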
OpenShift adds convenience commands:
- `oc expose` — quickly create Services from deployments or other resources.
- `oc get svc -o wide` — show more details, including cluster IP and ports.
Common Pitfalls and Considerations
- Incorrect selectors:
  - If your `selector` does not match any pods, the Service will exist but route to nothing.
  - Always check `oc get endpoints <service>` to confirm there are backing pods.
- Port mismatches:
  - Ensure `targetPort` matches the container’s listening port.
  - Use named ports to reduce confusion.
- Relying solely on environment variables:
  - New Services are not visible to already-running pods via environment variables.
  - Prefer DNS for dynamic environments.
- Namespace confusion:
  - `backend` in namespace `dev` and `backend` in namespace `prod` are different Services.
  - Be explicit with FQDNs when crossing namespaces.
Understanding these aspects of Services and service discovery will make it easier to design, deploy, and debug microservices-based applications on OpenShift, and to integrate them cleanly with the rest of the cluster networking stack.