On the Kubernetes team, focused on storage, networking, node, and other “infrastructure” things.
In my time at Google:
- Worked on Borg & Omega
- Machine management & monitoring
- BIOS & Linux kernel
“I reserve the right to have an opinion, regardless of how wrong I probably am.”
Make sure you actually get what you paid for.
• Kubernetes (and Docker) isolate CPU and memory
• They don’t (yet) handle things like memory bandwidth, disk time, cache, or network bandwidth
• Predictability at the extremes is paramount
Compressible resources:
• Don’t hold state
• Can be taken away very quickly
• “Merely” cause slowness when revoked
• e.g. CPU, disk time
Non-compressible resources:
• Hold state
• Are slower to be taken away
• Can fail to be revoked
• e.g. Memory, disk space
Request: the amount of a resource allowed to be used, with a strong guarantee of availability
• CPU (seconds/second), RAM (bytes)
• Scheduler will not over-commit requests
Limit: the max amount of a resource that can be used, regardless of guarantees
• Scheduler ignores limits
Repercussions:
• request < usage <= limit: resources might not be available
• usage > limit: throttled or killed
[diagram: container CPU usage against a limit of 1.5]
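In the Kubernetes API, these two numbers are set per container. A minimal sketch (the Pod name, container name, image, and values are all illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # any image; illustrative only
    resources:
      requests:
        cpu: "500m"        # 0.5 CPU-seconds/second, guaranteed by the scheduler
        memory: "256Mi"    # guaranteed bytes of RAM
      limits:
        cpu: "1500m"       # hard cap: CPU usage above this is throttled
        memory: "512Mi"    # hard cap: memory usage above this risks OOM-kill
```

The scheduler bin-packs on the requests; the limits are enforced on the node.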
What happens at the limit depends on the particular resource.
Compressible resources: throttle usage
• e.g. No more CPU time for you!
Non-compressible resources: reclaim
• e.g. Write back and reallocate dirty pages
• Failure means process death (OOM)
Being correct is more important than being optimal.
2. Find some clean pages to release (consumes CPU)
3. Write back some dirty pages (consumes disk time)
4. If necessary, repeat this on another container
How long should this be allowed to take?
Really: this should be happening all the time.
Coupled resources: reclaiming memory consumes CPU and disk time.
How much CPU/RAM does my job need?
Do I provision for the worst case?
• Expensive, wasteful
Do I provision for the average case?
• High failure rate (e.g. OOM)
Benchmark it!
Horizontal scaling is easier to reason about.
• Works well when combined with resource isolation
• Having >1 replica per node makes sense
• Not always applicable, e.g. when memory use scales with cluster size
HorizontalPodAutoscaler ...
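A sketch of a HorizontalPodAutoscaler object in the autoscaling/v2 API (the names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example          # hypothetical: the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when avg CPU exceeds 70% of request
```

The controller adjusts the Deployment's replica count to hold average utilization near the target.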
Benchmarks are not enough: resource needs change over time.
If only we had an “autopilot” mode...
• Collect stats & build a model
• Predict and react
• Manage Pods, Deployments, Jobs
• Try to stay ahead of the spikes
Autopilot:
• See the earlier statement regarding benchmarks (even at Google)
• The Kubernetes API is purpose-built for this sort of use-case
• We need a VerticalPodAutoscaler
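A VerticalPodAutoscaler has since been built in the kubernetes/autoscaler project; a sketch of what such an object looks like there (names and bounds are illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example          # hypothetical: the workload to right-size
  updatePolicy:
    updateMode: "Auto"     # apply recommendations by recreating Pods
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:          # floor for recommendations
        cpu: "100m"
        memory: "128Mi"
      maxAllowed:          # ceiling for recommendations
        cpu: "2"
        memory: "2Gi"
```

It collects usage stats, builds a model, and rewrites container requests: exactly the "autopilot" loop above.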
Strong isolation:
• If you want to push the limits, it has to be safe at the extremes
• People are inherently cautious: they provision for the 90%-99% case
• VPA & strong isolation should give enough confidence to provision more tightly
• We need to do some kernel work here
Priority: low-priority jobs get paused/killed in favor of high-priority jobs
Quota: if everyone is important, nobody is important
Overcommit: hedge against rare events with a lower QoS/SLA for some work
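In Kubernetes these two knobs map to PriorityClass and ResourceQuota objects; a sketch (names, values, and the namespace are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low          # hypothetical name
value: 1000                # lower value: first to be preempted
globalDefault: false
description: "Preemptible batch work"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota         # hypothetical name
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    requests.cpu: "20"     # total guaranteed CPU this namespace may claim
    requests.memory: 64Gi
    limits.cpu: "40"       # overcommit: limits capped above requests
    limits.memory: 128Gi
```

Pods reference the class via `priorityClassName`; the quota caps what each team can book, so not everyone can be "important".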
Monitor real usage per-container. The delta between request and reality is idle capacity: resell it with a lower SLA.
• First-tier apps can always get what they paid for: kill second-tier apps
• Use stats to decide how aggressive to be
• Let the priority system deal with the debris
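Kubernetes derives a QoS class from how requests and limits are set, which is one way to express the two tiers; a sketch (Pod names, images, and sizes are illustrative):

```yaml
# First tier: requests == limits gives QoS class "Guaranteed";
# last to be evicted under node memory pressure.
apiVersion: v1
kind: Pod
metadata:
  name: first-tier         # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # illustrative image
    resources:
      requests: {cpu: "1", memory: 1Gi}
      limits:   {cpu: "1", memory: 1Gi}
---
# Second tier: no requests/limits gives QoS class "BestEffort";
# it soaks up idle capacity and is the first to be killed
# when a first-tier app asks for what it paid for.
apiVersion: v1
kind: Pod
metadata:
  name: second-tier        # hypothetical name
spec:
  containers:
  - name: batch
    image: busybox         # illustrative image
    command: ["sh", "-c", "do-batch-work"]   # hypothetical workload
```

Burstable pods (request < limit) sit between the two tiers.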
Leave room to operate:
• Nodes fail or get upgraded
As you approach 100% bookings (requests), consider what happens when things go bad.
• Nowhere to squeeze the toothpaste!
Plan for some idle capacity; it will save your bacon one day.
• Priorities & rescheduling can make this less expensive