Kubernetes open source project • Google Container Engine (GKE) developer experience. Previously Software Engineer at Microsoft Azure: • Docker on Windows • Azure Container Registry • Docker images for ASP.NET • Developed the official C#/.NET library for Docker. Say hello at @ahmetb.
those used internally at Google became available as open source, through various channels: • People read Google's research papers and implemented the software as open source. • Companies like Google started to open source tools they were using internally. • Many people who left Google needed similar tools at their new jobs and projects.
The mission was to make sure the open source projects that make it possible to run software distributed on tens of thousands of shared nodes are successful. It's part of The Linux Foundation. Founding members: Google, CoreOS, Docker, eBay, Twitter, IBM, Intel, Mesosphere, VMware and many others. Currently 100+ members, including AWS, Microsoft, Oracle and Red Hat.
Critical for creating a future with scalable infrastructure and distributed software. These projects no longer belong to the companies that built them in the first place; the CNCF owns them.
have separate roles. Developed independently. Scaled independently. Crash independently. Dynamic scheduling: apps crash and nodes die all the time. Make peace with this microservices axiom and you can focus on fewer failure modes. Packaging: containers revolutionized how we package and deliver software. Portable, atomic, isolated. Docker showed it is possible.
compute nodes to your infrastructure. Multi-tenancy: can multiple users/teams use your clusters safely? Meet Kubernetes. Kubernetes solves these scale needs elegantly: it drives the cluster to the goal state defined by declarative configuration (i.e. manifest files). Enterprise-grade, production-ready. Declarative application management: infrastructure as code. To what extent can you declaratively describe your apps, configuration, secrets, and networks? Can an intern deploy a copy of your whole stack?
can check these files into Git, and as you deploy new versions of the app, you can just apply the new manifest to the cluster.

guestbook-app.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      run: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        run: guestbook
        tier: frontend
    spec:
      containers:
      - image: gcr.io/ahmetb/guestbook:v2
        name: frontend
        ports:
        - containerPort: 80
source community. Google Container Engine (GKE) gives you a production-ready cluster in a few minutes; Google engineers are on call for your cluster. Easy migration to/from the cloud: run your Docker containers locally in Minikube, then move to the cloud using the same manifest files:
$ kubectl apply -f manifests/*.yaml
Service discovery: the kube-dns add-on lets you reach any Service in the cluster by its name. Example:
$ curl http://guestbook:80
Some things you get with Kubernetes
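For the service-discovery example above, the DNS name guestbook exists because a Service object selects the frontend Pods. A minimal sketch, assuming the selector labels match the guestbook Deployment shown earlier (the file name and port choice are illustrative):

guestbook-svc.yaml (illustrative):

apiVersion: v1
kind: Service
metadata:
  name: guestbook          # becomes the in-cluster DNS name "guestbook"
spec:
  selector:
    run: guestbook         # matches the Pod labels from the Deployment above
    tier: frontend
  ports:
  - port: 80               # curl http://guestbook:80 hits this port
    targetPort: 80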
particular binary running in production. Rollback strategy: even when you use manifest files, can you roll back a service and its dependencies, configuration, etc. safely? Tooling does not matter: most tools are fine. But Spinnaker (by Netflix, now open source) offers advanced primitives and ways to build pipelines, if your deployments require complex machinery. Automate your deployments: engineering time spent on each deployment is a waste. Automate your way out! CI/CD
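On the Kubernetes side, Deployments already record a revision history, so a basic rollback can be a one-liner. A minimal sketch using the guestbook Deployment from earlier (the revision number is illustrative):

$ kubectl rollout history deployment/guestbook                 # list recorded revisions
$ kubectl rollout undo deployment/guestbook                    # roll back to the previous revision
$ kubectl rollout undo deployment/guestbook --to-revision=2    # or pick a specific revision

This only rolls back the Deployment itself; dependencies, configuration, and secrets still need their own rollback story, which is the point of the question above.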
proxy. No change to your application code. Proxies sync with a rule/routing list; you only configure the service mesh dynamically. Modern cluster networking. You can enforce cluster-wide (see the sketch below): • retry policies • timeouts • access control policies • encryption • authorization • custom routing
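As an illustration of configuring the mesh instead of the app, here is a minimal sketch of a timeout/retry policy, assuming Istio's VirtualService API (the resource name and values are made up for the guestbook service):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: guestbook
spec:
  hosts:
  - guestbook              # applies to in-cluster calls to the guestbook service
  http:
  - route:
    - destination:
        host: guestbook
    timeout: 3s            # fail calls that take longer than 3 seconds
    retries:
      attempts: 3          # retry failed calls up to 3 times
      perTryTimeout: 1s    # each attempt gets at most 1 second

The application keeps making plain calls to guestbook; the sidecar proxies apply the policy.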
since 2001. Open sourced in 2008. Everything inside Google is protobuf. Use an RPC framework: you should probably stop using JSON REST APIs between services in your company. They are prone to human error, hard to version, and make streaming and compression difficult. RPC frameworks: gRPC. Google has also released gRPC, a framework that makes it easy to use protobuf. Easy client/server code generation. Used by many open source projects, Square, Netflix, etc.
}  // end of the UserService definition shown above

message UserRequest {
  string id = 1;
  bool includeDetails = 2;
}

message UserReply {
  string id = 1;
  string username = 2;
  string email = 3;
}

The client and server code for this UserService is automatically generated, including the message types. Compile-time safety.
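Code generation is a single compiler step. A minimal sketch, assuming the definitions above live in a file named user.proto and the Go gRPC plugin is used (the file name, target language, and output path are assumptions, not from the slides):

$ protoc --go_out=plugins=grpc:. user.proto    # emits user.pb.go with UserService client and server stubs

Equivalent plugins exist for C++, Java, C#, Python, and others, which is how every service gets a type-safe client for free.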
passes through microservices: • which microservices the request passes through • how much time is spent in each microservice • how long a function takes within a service. Often sampled: 1 in 100 requests. It's not working correctly if you're not observing it. Black-box monitoring: looking at application or node memory/CPU/network metrics from the outside; also the health of an application as the user sees it. White-box monitoring: the application exposes its internal metrics through an HTTP endpoint, collected by a tool. Counter: http.requests=5256. Gauge: http.connections=12. Metrics can be aggregated: → all instances of a service → all services in a region → apps labeled as staging=true. Prometheus is the de-facto solution.
White-box: a /metrics endpoint

uptime_minutes 1128
accounts.created 12
accounts.logins.success 560
accounts.logins.failure 4
rpc.sent.count 880
rpc.sent.avg_sec 1.3525677

Assume this is for a frontend service that handles the /login and /signup requests. Write queries to find averages/percentiles/sums of these metrics across all instances of this app. Turn the queries into alerts:
- if no successful logins in the past 5m, alert!
- if failed logins are >75%, alert!
- if mean uptime goes below 5 mins, alert!
- if CPU average stays above 80% for 15m, alert!
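The first two alerts could be written as Prometheus alerting rules. A minimal sketch, assuming the metrics above are exported under Prometheus-style names (dots become underscores) and scraped from every frontend instance:

groups:
- name: frontend-login-alerts
  rules:
  - alert: NoSuccessfulLogins
    expr: sum(rate(accounts_logins_success[5m])) == 0        # no successful logins in the past 5m
    for: 5m
  - alert: HighLoginFailureRatio
    expr: |
      sum(rate(accounts_logins_failure[5m]))
        / (sum(rate(accounts_logins_failure[5m])) + sum(rate(accounts_logins_success[5m]))) > 0.75
    for: 5m

The sum(...) aggregation is what lets a single rule cover every instance of the service.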
“trusted” requestor? Your applications should have an identity verifiable by a public-key infrastructure (PKI). Use JWTs. Istio also provides strong identity automatically, with no changes to application code. In-cluster traffic security: can you identify and allow/block traffic to microservices? Authorization: a trusted app does not mean it should be able to call all other services in the cluster. ACLs (access control lists): Kubernetes offers Network Policies (like firewalls).
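For example, a Network Policy can restrict which Pods may talk to a database. A minimal sketch with made-up labels (the database Pods are assumed to carry app: guestbook-db and the callers tier: frontend):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: guestbook-db-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: guestbook-db        # the Pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend       # only frontend Pods may connect
    ports:
    - port: 3306               # and only on the MySQL port

All other in-cluster traffic to those Pods is dropped once a policy selects them.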
cloud-native platforms. Originated from Cloud Foundry. Examples: • “Give me a MySQL database and deliver credentials to the app.” • “Provision a storage bucket for my app.” Service brokerage. Terminology: Service Broker: a server that provisions resources. Service Instance: a request for provisioning a new instance of a service (e.g. creating a database). Service Instance Binding: associating a service instance with an application, i.e. delivering credentials and access details. Service Catalog: orchestrates the concepts above in a platform, like Kubernetes.
Kubernetes Service Catalog – example concept (alpha, subject to change). The slides showed partial manifests: a service instance requesting the smalldb plan (planName: smalldb), and a binding for the guestbook-db instance that delivers credentials into a Secret (secretName: guestbook-db-password).
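A minimal sketch of what those two objects might look like, assuming the Service Catalog's later servicecatalog.k8s.io/v1beta1 shape (the class name and MySQL broker are illustrative; the alpha API on the slide used slightly different field names such as planName):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: guestbook-db
spec:
  clusterServiceClassExternalName: mysql        # which service the broker should provision
  clusterServicePlanExternalName: smalldb       # which plan ("planName" in the alpha API)
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: guestbook-db-binding
spec:
  instanceRef:
    name: guestbook-db                          # bind to the instance above
  secretName: guestbook-db-password             # credentials are delivered into this Secret

The application then mounts or reads the guestbook-db-password Secret; it never talks to the broker directly.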