• Leading open source PaaS with 5.7 million downloads
• Building products for the Kubernetes ecosystem
◦ Deis Workflow
◦ Deis Helm

I have Kubernetes, now what?
images
• Configure runtime environment
• Manage releases and rollbacks
• Run administrative commands
• View aggregated logs
• Scale out via the process model
• Collaborate with a team

… with simple, easy-to-use CLIs and APIs
are replaced, not modified
• images are baked on demand
• Deployment services use monitoring data to canary, roll forward, roll back
• Changes propagate through dependency graphs automatically, subject to policy

Source: https://docs.google.com/presentation/d/11kp272oeNxpvNTZVYMoH5LsdrLRXIm2PTD5DrVjYFZs/edit#slide=id.geba7160b9_10_120
Foundation will be integrating the orchestration layer of the container ecosystem. Joyent, CoreOS, IBM, VMware, Cisco, Weaveworks and others have all offered up relevant technology, and we look forward to working closely as a group to bring the disparate projects together cleanly. Kubernetes has been offered as a seed technology.

Source: https://docs.google.com/presentation/d/11kp272oeNxpvNTZVYMoH5LsdrLRXIm2PTD5DrVjYFZs/edit#slide=id.gd6a2cee48_23_49
Greek for “helmsman”; also the root of the words “governor” and “cybernetic”
• Runs and manages containers
• Inspired and informed by Google’s experiences and internal systems
• Supports multiple cloud and bare-metal environments
• Supports multiple container runtimes
• 100% open source, written in Go

Manage applications, not machines
• Helm focused on non-12-factor applications
◦ Databases
◦ Queues
◦ Caches
• Helm promotes a Kubernetes-native approach
• Helm is a tool for operators, not developers

Helm is used to install Workflow
Labels
• Generally represent identity
• Queryable by selectors: think SQL ‘select ... where ...’
• The only grouping mechanism
◦ pods under a ReplicationController
◦ pods in a Service
◦ capabilities of a node (constraints)
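The ‘select ... where ...’ analogy can be sketched in a few lines. This is an illustrative Python model of an equality-based selector, not Kubernetes code; the pod list and label keys are made up:

```python
def matches(selector, labels):
    """True if every key/value pair in the selector appears in the labels.

    Mirrors an equality-based Kubernetes selector: a label set is selected
    when it satisfies 'where key = value' for every pair in the selector.
    """
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

# "select name from pods where app = web"
selected = [p["name"] for p in pods if matches({"app": "web"}, p["labels"])]
print(selected)  # ['web-1', 'web-2']
```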
• Tightly coupled
• The atom of scheduling & placement
• Shared namespace
◦ share IP address & localhost
◦ share IPC, etc.
• Managed lifecycle
◦ bound to a node, restart in place
◦ can die, cannot be reborn with same ID

Example: data puller & web server
[Diagram: a Pod containing a File Puller and a Web Server sharing a Volume; a Content Manager feeds the File Puller and Consumers talk to the Web Server]
wrt API server
• Has 1 job: ensure N copies of a pod
◦ if too few, start some
◦ if too many, kill some
◦ grouped by a selector
• Cleanly layered on top of the core
◦ all access is by public APIs
• Replicated pods are fungible
◦ no implied order or identity

ReplicationController
- name = “my-rc”
- selector = {“App”: “MyApp”}
- podTemplate = { ... }
- replicas = 4

[Diagram: control loop against the API Server: “How many?” 3 → “Start 1 more” → OK → “How many?” 4]
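The “How many? Start 1 more” loop above reduces to a tiny reconcile step. This Python sketch (hypothetical simplification, not the real controller) counts pods matching the selector and reports the delta to the desired count:

```python
def reconcile(rc, pods):
    """One pass of a ReplicationController-style control loop.

    Returns how many pods to start (positive) or kill (negative)
    to reach the desired replica count for pods matching the selector.
    """
    def selected(labels):
        return all(labels.get(k) == v for k, v in rc["selector"].items())

    observed = sum(1 for p in pods if selected(p["labels"]))
    return rc["replicas"] - observed

rc = {"name": "my-rc", "selector": {"App": "MyApp"}, "replicas": 4}
pods = [{"name": f"pod-{i}", "labels": {"App": "MyApp"}} for i in range(3)]

print(reconcile(rc, pods))  # 1  -> "Start 1 more"
```

Because replicated pods are fungible, the loop never cares *which* pod to kill or start, only how many.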
together
• grouped by a selector
• Defines access policy: “load balanced” or “headless”
• Gets a stable virtual IP and port
◦ sometimes called the service portal
◦ also a DNS name
• VIP is managed by kube-proxy
◦ watches all services
◦ updates iptables when backends change

Hides complexity - ideal for non-native apps
[Diagram: Client → Virtual IP → backend pods]
access to a secured something?
• don’t put secrets in the container image!
• 12-factor says: config comes from the environment
◦ Kubernetes is the environment
• Manage secrets via the Kubernetes API
• Inject them as virtual volumes into Pods
◦ late-binding
◦ tmpfs - never touches disk

[Diagram: the node fetches the Secret from the API and mounts it into the Pod]
utilization
• CPU utilization for now
• probably more later
• Operates within user-defined min/max bounds
• Set it and forget it

Status: BETA in Kubernetes v1.1
[Diagram: autoscaler feedback loop driven by stats]
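The sizing rule behind “scale on utilization within min/max bounds” is roughly proportional. This Python sketch is an approximation of that policy, not the exact v1.1 autoscaler code; all parameter names are illustrative:

```python
import math

def desired_replicas(current, current_util, target_util, lo, hi):
    """Approximate autoscaler sizing: scale the replica count in
    proportion to observed vs. target utilization, then clamp to
    the user-defined min/max bounds."""
    want = math.ceil(current * current_util / target_util)
    return max(lo, min(hi, want))

# 4 replicas at 90% CPU against a 50% target -> grow to 8
print(desired_replicas(4, current_util=90, target_util=50, lo=2, hi=10))  # 8
# Nearly idle -> shrink, but never below the floor of 2
print(desired_replicas(4, current_util=10, target_util=50, lo=2, hi=10))  # 2
```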
on every node
• or a subset of nodes
• Similar to ReplicationController
◦ principle: do one thing, don’t overload
• “Which nodes?” is a selector
• Use familiar tools and patterns

Status: ALPHA in Kubernetes v1.1
any one cloud environment
• Admin provisions them, users claim them
• Independent lifetime and fate
• Can be handed off between pods; lives until the user is done with it
• Dynamically “scheduled” and managed, like nodes and pods
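The “admin provisions, user claims” handshake amounts to matching a capacity request against a pool of volumes. This Python sketch uses one illustrative binding policy (smallest volume that fits); the field names are made up, not the Kubernetes API:

```python
def bind(claim, volumes):
    """Bind a claim to the smallest unbound volume that satisfies
    the requested capacity; return its name, or None if nothing fits."""
    candidates = [v for v in volumes
                  if not v["bound"] and v["capacity_gb"] >= claim["request_gb"]]
    if not candidates:
        return None
    best = min(candidates, key=lambda v: v["capacity_gb"])
    best["bound"] = True   # a volume serves one claim at a time
    return best["name"]

# Admin-provisioned pool; users never see these directly.
volumes = [
    {"name": "pv-small", "capacity_gb": 10,  "bound": False},
    {"name": "pv-big",   "capacity_gb": 100, "bound": False},
]
print(bind({"request_gb": 8},  volumes))  # pv-small
print(bind({"request_gb": 50}, volumes))  # pv-big
```

The claim, not the pod, owns the volume: a pod can release it and another pod can pick the same claim back up.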
• name collisions in the API
• poor isolation between users
• don’t want to expose things like Secrets

Solution: Slice up the cluster
• create new Namespaces as needed
◦ per-user, per-app, per-department, etc.
• part of the API - NOT private machines
• most API objects are namespaced
◦ part of the REST URL path
• Namespaces are just another API object
• One-step cleanup - delete the Namespace
• Obvious hook for policy enforcement (e.g. quota)
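“Part of the REST URL path” is concrete: namespaced objects carry their Namespace in the resource URL, which is how two users can each own a pod named `web-1`. A minimal Python sketch of the v1 path shapes (the namespace and object names are examples):

```python
def resource_path(kind, name, namespace=None):
    """Build the v1 API path for an object; namespaced kinds embed
    their Namespace in the URL, cluster-scoped kinds do not."""
    if namespace:
        return f"/api/v1/namespaces/{namespace}/{kind}/{name}"
    return f"/api/v1/{kind}/{name}"

# Same pod name, no collision: the Namespace disambiguates.
print(resource_path("pods", "web-1", namespace="team-a"))
# /api/v1/namespaces/team-a/pods/web-1
print(resource_path("pods", "web-1", namespace="team-b"))
# /api/v1/namespaces/team-b/pods/web-1

# Nodes are cluster-scoped, so no namespace segment appears.
print(resource_path("nodes", "node-7"))
# /api/v1/nodes/node-7
```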