Which port does each service run on? How do we avoid port conflicts? How does the application code find the Guest Book service? How do we keep all of these services running? What happens if a host machine has trouble? Are the services healthy? How do we scale when load changes? How do we run this in another environment: QA, dev, another cloud, your own servers?
Image layering for Java applications:
• Base layer - java:8 (not frequently updated)
• Next layer - Dependency JARs (not frequently updated)
• Last layer - Application JAR (frequently updated)
Use dockerfile-maven-plugin, copy-dependencies, or slimfast to build images this way.
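A minimal Dockerfile sketch of this layering, assuming a Maven build where copy-dependencies has placed the dependency JARs in target/dependency; the paths and main class are illustrative:

# Base layer: JDK base image (rarely changes, so it stays cached)
FROM java:8

# Middle layer: dependency JARs (change only when dependencies change)
COPY target/dependency/ /app/lib/

# Last layer: the application JAR (rebuilt on every code change)
COPY target/app.jar /app/app.jar

# com.example.Main is an illustrative main class; Java expands the /app/lib/* wildcard
ENTRYPOINT ["java", "-cp", "/app/app.jar:/app/lib/*", "com.example.Main"]

Because Docker caches layers top-down, only the small application-JAR layer is rebuilt and pushed on a typical code change.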
job hello_world = {
  runtime = { cell = 'ic' }             // Cell (cluster) to run in
  binary = '.../hello_world_webserver'  // Program to run
  args = { port = '%port%' }            // Command line parameters
  requirements = {                      // Resource requirements
    ram = 100M
    disk = 100M
    cpu = 0.1
  }
  replicas = 5                          // Number of tasks
}
10000
[Borg architecture diagram: a replicated BorgMaster per cell (link shards, UI shards), a scheduler, and a Paxos-based persistent store; Borglets running on every machine; a config file and binary are submitted via borgcfg or web browsers]
What just happened?
Kubernetes: Greek for "helmsman"; root of the words "governor" and "cybernetic"
• Infrastructure for containers
• Schedules, runs, and manages containers on virtual and physical machines
• Platform for automating deployment, scaling, and operations
• Inspired and informed by Google's experiences and internal systems
• 100% open source, written in Go
[Diagram: Deployment with replicas → 2; frontend pods get generated names (cb-axk3u, cb-a94kd) and labels type = Couchbase, version = 1.0, with a CB-Disk volume mount]
[Diagram: replicas → 2; frontend pods get stable names cb-0 and cb-1, labels type = Couchbase, version = 1.0; each pod has its own volume mount (cb-0, cb-1)]
[Diagram: as above, with the cb-0 and cb-1 volumes automatically provisioned]
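In current Kubernetes this pattern maps to a StatefulSet: stable pod names (cb-0, cb-1) plus a volumeClaimTemplate that automatically provisions one volume per pod. A minimal sketch, assuming a headless Service named cb exists and using illustrative image, labels, and storage size:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cb
spec:
  serviceName: cb                         # headless Service assumed to exist
  replicas: 2
  selector:
    matchLabels:
      type: Couchbase
  template:
    metadata:
      labels:
        type: Couchbase
        version: "1.0"
    spec:
      containers:
      - name: couchbase
        image: couchbase:community        # illustrative image tag
        volumeMounts:
        - name: data
          mountPath: /opt/couchbase/var   # Couchbase data directory
  volumeClaimTemplates:                   # one PVC per pod: data-cb-0, data-cb-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi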
the "closest" healthy cluster. Standard Kubernetes service load balancing within each cluster. Can be extended to divert traffic away from "healthy-but-saturated" clusters. Cross-cluster Load Balancing
Don't run my replicas in the same failure domain (host/rack/zone)
Topology:
• Same host
• Same rack
• Same zone
• Same metro region
• Same sub-continent
Absolute affinity
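"Absolute" (hard) anti-affinity corresponds to requiredDuringSchedulingIgnoredDuringExecution in the pod spec. A minimal sketch, with an illustrative app label and image; swapping topologyKey (e.g. to a zone key) selects a different failure domain from the list above:

apiVersion: v1
kind: Pod
metadata:
  name: frontend-0              # illustrative name
  labels:
    app: frontend               # illustrative label shared by all replicas
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard ("absolute") rule
      - labelSelector:
          matchLabels:
            app: frontend       # don't schedule next to pods with this label
        topologyKey: kubernetes.io/hostname             # failure domain: same host
  containers:
  - name: frontend
    image: nginx                # illustrative image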
[Diagram: Federation Control Plane (Federation API Server, Federation Controller, federation key/value store (etcd)) fronting Cluster 1 (us-east1-b), Cluster 2 (us-central1-b), Cluster 3 (europe-west1-b), and Cluster 4 (asia-east1-b), each with its own API]
The federation API is addressed through its own kubeconfig context:
contexts:
- context:
    cluster: federation-cluster
    user: federation-cluster
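Once that context is in kubeconfig, pointing kubectl at the federation control plane is just a context switch (standard kubectl commands; the context name comes from the snippet above):

kubectl config use-context federation-cluster   # subsequent commands go to the federation API server
kubectl create -f nginx-service.yaml            # e.g. the federated Service shown below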
[Diagram: kubectl create -f nginx-service.yaml against the Federation Control Plane creates an nginx Service in Kubernetes Cluster 1 (Cloud), Cluster 2 (On-Prem), and Cluster 3 (Another Cloud), and programs cross-cluster DNS for it]
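A minimal sketch of what nginx-service.yaml might contain (the selector label and port are illustrative assumptions); submitted to the federation API server, the same Service object is propagated to every member cluster and its endpoints are published in federated DNS:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer            # each cluster gets an external endpoint for cross-cluster DNS
  selector:
    run: nginx                  # illustrative label on the nginx pods
  ports:
  - port: 80
    targetPort: 80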