Big Picture: How OpenShift Relates to Kubernetes
OpenShift is a Kubernetes platform, not a competitor to Kubernetes.
- Kubernetes is the upstream, open-source container orchestration project.
- OpenShift is a productized, opinionated, enterprise distribution of Kubernetes with additional components, defaults, and tooling.
You can think of it like this:
- Kubernetes = kernel + core OS utilities
- OpenShift = “full Linux distribution” built around that kernel, plus management tools, security hardening, and a support model.
So OpenShift includes Kubernetes and adds:
- A specific, tested Kubernetes version
- A default networking, storage, and security stack
- Built‑in tools for builds, CI/CD, image management, and more
- A web console and CLI tailored to the platform
- Enterprise support and lifecycle guarantees
Upstream vs Distribution
Kubernetes: The Upstream Project
Kubernetes is:
- A community-driven project hosted by the CNCF.
- Released on a fast cadence (currently about three minor releases per year).
- Packaged and shipped by many vendors as different Kubernetes distributions.
On its own, Kubernetes provides:
- APIs and controllers to manage pods, services, etc.
- A reference model, not a single “official” packaged product.
To build a usable platform from raw Kubernetes, a platform team must decide:
- Which CNI plugin to use
- How to implement ingress, logging, monitoring, registry, authentication
- How to handle upgrades, backups, multi-tenancy, quotas, and more
OpenShift: A Kubernetes Distribution
OpenShift is:
- A Red Hat–maintained Kubernetes distribution.
- Based on a specific Kubernetes version, with:
- Integrated components (networking, registry, logging, monitoring, Operators)
- Tighter defaults and policies (especially around security and multi-tenancy)
- An opinionated installation, upgrade, and lifecycle story
OpenShift tracks upstream Kubernetes closely, but:
- Features are brought in once they are evaluated, integrated, and tested.
- Some components or APIs might be:
- Marked as Technology Preview before full support
- Hidden or disabled until they meet support criteria
Platform Scope: Bare Kubernetes vs OpenShift
What You Get with “Just Kubernetes”
With upstream Kubernetes (e.g., from kubeadm or a minimal distribution), you often need to assemble:
- Networking (CNI: Calico, Flannel, Cilium, etc.)
- Ingress (NGINX ingress controller, HAProxy, or similar)
- Container registry (Harbor, Docker Registry, or cloud vendor)
- Monitoring and logging (Prometheus, Grafana, EFK/ELK, Loki, etc.)
- Authentication/authorization integration
- Upgrade and cluster lifecycle tooling
The result can be very flexible but requires:
- More design work and integration effort
- Continuous maintenance by a platform or SRE team
What OpenShift Adds on Top
OpenShift ships as a more complete platform:
- Key component choices made for you:
- Built-in SDN networking and routing layer
- Default ingress / edge routing with Routes
- Integrated registry for container images
- Built-in monitoring and logging stack (platform-level)
- Operator framework to manage platform and add-ons
- Cluster lifecycle management (install, upgrade, day‑2 ops)
- Web console tailored to developers and operators
- Source-to-Image and other build/deploy workflows
The trade-off:
- Less time spent “building a platform”
- More time spent using the platform, but within certain conventions and guardrails
Security and Multi-Tenancy Differences
Default Security Posture
Vanilla Kubernetes:
- Is relatively permissive by default; many distributions let containers:
- Run as root
- Use broader capabilities
- Leaves security policies and enforcement largely to the cluster operator.
OpenShift:
- Is secure by default:
- Enforces stricter Security Context Constraints (SCCs)
- Runs application containers with arbitrary, non-root UIDs (drawn from a per-project range) by default
- Often requires:
- Container images to be non-root compatible
- Proper use of file permissions and security contexts
Impact for users:
- Some images that “just work” on a relaxed Kubernetes cluster may fail on OpenShift until they’re hardened.
- In exchange, you get:
- Better multi-tenant isolation
- Reduced risk from privileged containers or misconfigured workloads
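For example, an image hardened for OpenShift’s default restricted SCC typically avoids pinning a UID and drops privileges explicitly. The manifest below is a minimal sketch (the image name and port are illustrative assumptions); the same spec also runs on plain Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                 # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # assumed non-root-compatible image
          ports:
            - containerPort: 8080
          securityContext:
            runAsNonRoot: true                 # refuse to start if the image would run as root
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]                    # drop all Linux capabilities
            seccompProfile:
              type: RuntimeDefault
          # No runAsUser is pinned here: under the restricted SCC, OpenShift
          # injects an arbitrary non-root UID from the project's assigned range.
```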
Multi-Tenancy and Quotas
Kubernetes supports:
- Namespaces, Role-Based Access Control (RBAC), and ResourceQuota.
- But design and enforcement of multi-tenant models is largely left to you.
OpenShift builds multi-tenancy into:
- Projects (namespaces plus additional metadata and policies)
- Quota and limit-range integration as a normal part of project setup
- Predefined cluster roles and bindings aimed at:
- Cluster admins
- Project admins
- Developers
The result is a more opinionated multi-tenant experience, designed for many teams sharing a single cluster.
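To make this concrete, the quota and limit-range objects that OpenShift attaches to a project are ordinary Kubernetes resources. A minimal sketch, with the project name and values chosen purely for illustration:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # in OpenShift, the project name
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no requests
        cpu: 100m
        memory: 256Mi
      default:               # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
```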
Developer Experience: Kubernetes vs OpenShift
APIs Are Mostly the Same
At the core:
- Pods, Deployments, Services, ConfigMaps, Secrets, etc. are Kubernetes-native on OpenShift.
- YAML manifests for these resources are usually:
- Portable between Kubernetes and OpenShift, with minor differences in:
- SecurityContext
- Ingress vs Route configuration
- Storage classes and annotations
OpenShift adds its own resources (e.g., Routes, some Operator CRDs), but they are built on top of Kubernetes APIs.
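As a small illustration of that portability, a plain Service manifest like the sketch below (names and ports are illustrative) applies unchanged on either platform, whether via kubectl apply -f or oc apply -f:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    app: my-app              # matches the pods created by the Deployment
  ports:
    - name: http
      port: 80               # port the Service exposes inside the cluster
      targetPort: 8080       # container port traffic is forwarded to
```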
Tooling: kubectl vs oc
- Kubernetes uses kubectl as the standard CLI.
- OpenShift uses oc:
- oc can run most kubectl-like commands (oc get pods, etc.).
- It also includes OpenShift-specific commands, for example:
- oc new-app to quickly create applications from images or source
- Project- and user-focused commands that understand OpenShift’s model
In practice:
- OpenShift users interact with both standard Kubernetes APIs and OpenShift-specific enhancements.
Web Console vs Raw Kubernetes UI
Kubernetes itself does not include a web console; the Kubernetes Dashboard is an optional add-on that is often:
- Not installed or enabled by default
- Not deeply integrated with authentication and audit in many setups
OpenShift provides a full web console that:
- Exposes Kubernetes objects and OpenShift-specific resources
- Offers:
- Developer views (applications, topology, pipelines)
- Administrator views (nodes, Operators, cluster settings)
- Is tightly integrated with cluster authentication and RBAC
From a user perspective, OpenShift feels more like a managed platform than just a cluster API.
Application Exposure: Ingress vs Routes
Both platforms can expose applications externally, but the abstractions differ.
Kubernetes:
- Uses Ingress resources (and Ingress controllers) to:
- Map hostnames and paths to Services
- Details of behavior depend on the Ingress controller implementation.
OpenShift:
- Supports Ingress, but also introduces Routes:
- A Route is an OpenShift-specific resource that:
- Exposes a Service externally
- Encapsulates hostnames, TLS termination, and routing policies
- Provides a built-in routing layer:
- Typically an HAProxy-based router managed by the platform
Implications:
- On a plain Kubernetes cluster, teams may choose among many Ingress controllers.
- On OpenShift, you get a standard, integrated approach through Routes (with Ingress available as well).
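Side by side, the two abstractions look like this. Both sketches expose the same (illustrative) Service; the hostnames and ingress class are assumptions, not platform defaults:

```yaml
# Kubernetes Ingress (requires an Ingress controller, e.g. ingress-nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx            # illustrative controller class
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8080
---
# OpenShift Route (served by the platform's built-in router)
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com      # illustrative hostname
  to:
    kind: Service
    name: my-app
  port:
    targetPort: 8080
  tls:
    termination: edge                # TLS terminated at the router
```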
Integrated Platform Services and Operators
Operators and Extensibility
Kubernetes supports:
- Custom Resource Definitions (CRDs) and controllers, which form the basis of the Operator pattern.
- Many Operators are installed manually or via third-party catalogs.
OpenShift:
- Includes the Operator Lifecycle Manager (OLM).
- Provides a curated OperatorHub with:
- Platform Operators (for OpenShift itself)
- Add-on Operators (databases, messaging, storage, etc.)
- Uses Operators extensively to manage:
- Core platform components
- Upgrades and configuration of cluster services
This means:
- Adding a complex service (e.g., a database cluster) often becomes:
- Install Operator
- Create a Custom Resource
- Let the Operator manage lifecycle
Whereas on bare Kubernetes:
- You might need to deploy and maintain Helm charts or manifests yourself, with less integrated lifecycle management.
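For instance, the “install Operator, create a custom resource” flow described above usually reduces to two objects. The sketch below is hedged: the package name, channel, and the custom resource’s API group and kind are illustrative placeholders, not a real catalog entry:

```yaml
# 1. Subscribe to an Operator from the OperatorHub catalog (handled by OLM)
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-db-operator            # illustrative package name
  namespace: openshift-operators
spec:
  channel: stable
  name: example-db-operator
  source: redhat-operators             # catalog source to pull from
  sourceNamespace: openshift-marketplace
---
# 2. Create a custom resource; the Operator reconciles it into a running service.
#    The API group and kind here are hypothetical, defined by the Operator's CRD.
apiVersion: example.db.io/v1alpha1
kind: DatabaseCluster
metadata:
  name: orders-db
  namespace: team-a
spec:
  replicas: 3
  storage: 20Gi
```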
Installation, Upgrades, and Lifecycle
Kubernetes: Many Ways to Install
For Kubernetes, there are multiple distributions and installers:
- kubeadm, kops, Cluster API providers, cloud-vendor managed Kubernetes (EKS, GKE, AKS), etc.
- Lifecycle (upgrades, node replacement, etc.) is often:
- Distribution- or vendor-specific
- Managed by your DevOps / platform team or cloud provider
OpenShift: Opinionated Lifecycle Management
OpenShift provides:
- A defined set of deployment models:
- Installer-Provisioned vs User-Provisioned Infrastructure
- Managed OpenShift offerings
- Single-node variants
- A Cluster Version Operator (CVO) to:
- Orchestrate platform upgrades
- Coordinate Operator and component versions
- A supported and documented upgrade path:
- Tested version combinations
- Backward compatibility guarantees within support windows
For organizations, this means:
- A predictable lifecycle with clear support boundaries
- Less need to design and maintain custom upgrade orchestration
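As a rough sketch of what that looks like in practice, platform upgrades on OpenShift are driven through the cluster-scoped ClusterVersion object that the Cluster Version Operator reconciles. The channel and version values below are illustrative, and exact fields vary by release:

```yaml
# You edit the existing "version" object rather than creating a new one.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version                  # the single, cluster-scoped instance
spec:
  channel: stable-4.16           # illustrative update channel
  # Requesting an update is typically done by setting desiredUpdate
  # (or via `oc adm upgrade`), not by re-installing components:
  # desiredUpdate:
  #   version: 4.16.9
```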
Licensing, Support, and Ecosystem Position
Open Source vs Subscription
Kubernetes:
- Is entirely open source, with no licensing fees.
- Support is:
- Community-based, or
- Provided by a vendor that repackages Kubernetes
OpenShift:
- Is based on open source components (including Kubernetes), but:
- Distributed under Red Hat’s subscription model
- Comes with commercial support, SLAs, and lifecycle guarantees
Cost vs benefit:
- Kubernetes alone:
- Lower direct software cost
- Higher internal engineering and operations cost
- OpenShift:
- Licensing/subscription cost
- Lower internal cost for building and maintaining the base platform, plus vendor accountability
Ecosystem and Integration
Both Kubernetes and OpenShift:
- Leverage the broader cloud-native ecosystem:
- CNCF projects
- Container runtimes
- Storage and network plugins
OpenShift focuses on:
- Curated integrations:
- Tested combinations of CNIs, storage backends, Operators
- Certified partner solutions
- Close integration with:
- Red Hat Enterprise Linux / CoreOS
- Red Hat’s broader portfolio (e.g., Ansible and Quay)
This makes OpenShift particularly attractive where:
- Enterprise-grade support, certifications, and compliance matter.
- Teams want a single vendor accountable for the entire platform stack.
When to Use Kubernetes Alone vs OpenShift
Situations Suited to “Raw” Kubernetes or Minimal Distributions
- You need maximum flexibility in choosing every component.
- You have a strong SRE / platform engineering team.
- Cost sensitivity outweighs the value of a commercial platform.
- You’re building a highly custom platform (e.g., specialized research clusters).
Situations Suited to OpenShift
- You want a ready-to-use enterprise Kubernetes platform.
- Security, multi-tenancy, and compliance are high priorities.
- You prefer a single vendor for platform support and lifecycle.
- You want batteries-included tooling:
- Web console
- Integrated registry
- Built-in monitoring/logging
- Operator-driven services
- You are standardizing across:
- On-premises, private, and public clouds with a consistent environment.
In practice, many organizations end up with:
- Multiple Kubernetes environments, some of which are OpenShift clusters.
- OpenShift used as the primary enterprise platform, with other Kubernetes distributions used for special cases or in clouds where a managed service is already in place.
Summary of Key Differences
Conceptually:
- Kubernetes = core orchestration engine and APIs
- OpenShift = opinionated, integrated Kubernetes platform with enterprise support
Key contrasts:
- Project vs product: Kubernetes is upstream; OpenShift is a distribution.
- Flexibility vs integration: Kubernetes is highly flexible; OpenShift is more integrated and opinionated.
- Security defaults: OpenShift is stricter and more secure by default.
- Developer and operator experience: OpenShift adds a rich web console, oc CLI enhancements, Routes, and platform services.
- Lifecycle and support: OpenShift provides defined upgrade paths and vendor support; raw Kubernetes depends on your chosen vendor or your own team.
Understanding this relationship is essential for deciding where OpenShift fits in an organization’s overall Kubernetes and cloud-native strategy.