A bit of history…
• OpenShift has roots older than Kubernetes; this accounts for some of their differences
• June 6, 2014: first K8s commits
• July 2015: K8s v1.0
Kubernetes is a container cluster orchestrator. It has a proliferation of words:
• Control Plane and Controller - same
• Node - same
• Pod - yup, same
• Deployment - different, but a key concept
• Service - likewise different, but a key concept
• Ingress and Ingress Controller - different, but comparable to OpenShift Routes
Control plane: control loop(s), checking that actual state == desired state
• Controller - a control plane member implementing state reconciliation
• The default control plane schedules Pods onto cluster Nodes
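The reconciliation idea can be sketched in a few lines of Python. This is a toy illustration, not Kubernetes code: the `reconcile` function and its dict-shaped "state" are invented for this sketch; real controllers watch the API server rather than polling.

```python
# Toy reconciliation loop: compare desired vs. actual state and compute
# the actions needed to converge them (the core job of any controller).

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the delta of replicas needed to make `actual` match `desired`."""
    actions = {}
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have != want:
            actions[name] = want - have   # e.g. +2 => start two more Pods
    for name, have in actual.items():
        if name not in desired:
            actions[name] = -have         # remove anything unwanted
    return actions

desired = {"frontend": 3, "worker": 2}
actual = {"frontend": 1, "stale-job": 1}
print(reconcile(desired, actual))  # {'frontend': 2, 'worker': 2, 'stale-job': -1}
```

A real controller runs this comparison continuously, so drift in either direction is corrected without operator action.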
Service: a set of Pods and a manner for accessing them
• By default, a Service provides an endpoint on the cluster network (not external access)
• Usually a Service chooses Pods based on a label selector (eg, `role=frontend`)
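As a sketch, a minimal Service manifest selecting Pods by the `role=frontend` label could look like this (the name and ports are invented for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # hypothetical name
spec:
  selector:
    role: frontend          # the label selector from the slide
  ports:
    - port: 80              # Service port on the cluster network
      targetPort: 8080      # container port the selected Pods listen on
```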
Ingress: rules about external access to a Service
• Load balancing, SSL termination and name-based virtual hosting
• Typically HTTP at L7 (but depends on the …)
• An Ingress Controller is required
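A minimal Ingress sketch routing a hostname to a backend Service (the host and names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress        # hypothetical name
spec:
  rules:
    - host: www.example.com     # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend  # hypothetical Service name
                port:
                  number: 80
```

Note this resource does nothing by itself: an Ingress Controller must be running in the cluster to act on it.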
OpenShift adds to Kubernetes to make a PaaS-like experience
• Application oriented: the Deployment config
• Intelligent security and config defaults: multi-tenant, elaborated on the RBAC core in k8s
• Integrated container registry, the base for:
  • Build configurations
  • Image streams: streams of image tags from the registry - can trigger rebuild of apps atop those base images
• Deployment Configuration: ties together application items
• Route: getting external traffic to the app
OpenShift promotes build elements to first-class abstractions on the cluster platform
• Integrated container registry
• Software catalog
• Build configuration
• Image stream: tagged images, sourced in the registry; rollbacks to arbitrary points on that stream; rebuild apps when FROM is updated
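The rebuild-on-base-image-update idea can be sketched as a BuildConfig with an image-change trigger (a sketch only; the name, repo URL, and stream tags are hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp                            # hypothetical name
spec:
  source:
    git:
      uri: https://example.com/myapp.git # hypothetical repo
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag             # build FROM a tag on an image stream
        name: python:3.9                 # hypothetical base stream:tag
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest                 # push the result back to the registry
  triggers:
    - type: ImageChange                  # rebuild when the base tag moves
      imageChange: {}
```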
OpenShift promotes some new elements, and these tools know about them:
• Web console built around the Project -> App flow
• Graphical tools for native k8s things like volume claims, etc
• oc: CLI tool
• odo: CLI tool for developers
Project: a Kubernetes namespace, which isolates resources and access
• Intelligent RBAC defaults and user roles
• A Project defines and seals an "application"
• … in a way flexible enough for various architectures
• Projects enable multi-tenant use of an OpenShift cluster, with access privileges determined by the identity of the user or the team they belong to
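As an illustration of the multi-tenant flow (commands shown from memory; the project and user names are hypothetical, and a running cluster and login are of course required):

```
# Create a project (an OpenShift-managed namespace)
oc new-project team-a-frontend

# Grant another user edit rights inside just this project
oc policy add-role-to-user edit alice -n team-a-frontend
```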
Deployment Configuration: resources from multiple K8s and OpenShift components
• A deployment config:
  • Contains one or more application Pods (and thus their containers)
  • … again, in a way flexible enough for various architectures
  • Lists Services related by selectors
  • Is built on the Kubernetes Replication Controller (rather than ReplicaSet)
  • Knows how to build my app!
  • Tracks build config, build output (including pipelines), other development keys
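A minimal DeploymentConfig sketch with an image-change trigger (a sketch from memory; the names, labels, and image tags are hypothetical):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp                        # hypothetical name
spec:
  replicas: 2
  selector:
    app: myapp                       # DC selectors are plain key/value maps
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest        # resolved by the trigger below
  triggers:
    - type: ConfigChange             # redeploy when this spec changes
    - type: ImageChange              # redeploy when the image stream tag moves
      imageChangeParams:
        automatic: true
        containerNames:
          - myapp
        from:
          kind: ImageStreamTag
          name: myapp:latest
```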
Routes predate Ingress resources/controllers
• And remain considerably easier to think about and use
• OpenShift admins define Routers - effectively, edge routing between the cluster SDN and the real world where your customers live
• HAProxy: L7 is in the box
• Easy TLS, edge or passthrough
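A minimal Route sketch with edge TLS termination (the hostname and Service name are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend                 # hypothetical name
spec:
  host: www.example.com          # external hostname served by the Router
  to:
    kind: Service
    name: frontend               # hypothetical Service to expose
  tls:
    termination: edge            # TLS ends at the Router (HAProxy)
```

With `termination: passthrough` instead, the Router forwards the TLS stream untouched and the Pods terminate it themselves.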
… to get distributed system benefits
• Deployment choices are reduced: SDN, Ingress controller and LB costs
• But what about Layer 4?
• What about site-specific SDN concerns?
• Kubernetes flexibility: define alternative implementations
  • OpenShift Routers can be replaced, or
  • Kubernetes Ingress can be used instead