Non-traditional businesses have climbed to 8.1% of total revenue, and even 15%~20% for some operators ➤ The four new business models: ➤ Entertainment & Media ➤ M2M ➤ Cloud computing ➤ IT service Source: The Gartner Scenario for Communications Service Providers
& devices ➤ Strict protocols ➤ Reliability & performance ➤ High operation cost ➤ Long deployment time ➤ Complex operation processes ➤ Multiple hardware devices co-exist ➤ Closed ecosystem ➤ New business models require new network functions
on COTS computers ➤ that may be hosted in datacenters ➤ Speed up TTM ➤ Save TCO ➤ Encourage innovation ➤ Functionalities should be able to: ➤ be located anywhere most effective or inexpensive ➤ be rapidly combined, deployed, relocated, and upgraded
OS spin-up ➤ align guest OS with VNFs ➤ process mgmt services, startup scripts, etc. ➤ Provision container ➤ start processes in the right namespaces and cgroups ➤ no other overhead [Chart: Average Startup Time (Seconds) Over Five Measurements — container: 0.38s, KVM: 25s. Data source: Intel white paper]
VNF is able to push through the system is stable and similar in all three runtimes” [Chart: Packets per Second That a VNF Can Process in Different Environments (millions) — direct fwd / L2 fwd / L3 fwd, measured on Host, Container and KVM. Data source: Intel white paper]
difference ➤ VM shows instability ➤ caused by the time the hypervisor spends processing regular interrupts ➤ L2 forwarding ➤ no big difference ➤ container even shows extra latency ➤ extra kernel code execution in cgroups ➤ VM shows instability ➤ caused by the same reason as above Data source: Intel white paper
using about 125MB when booted ➤ Container ➤ only 17MB ➤ the amount of code loaded into memory is significantly less ➤ Deployment density ➤ is limited by incompressible resources ➤ Memory & Disk, while containers do not need disk provisioning [Chart: Memory footprint (MB) — container: 17, KVM: 125 (of a 256MB VM)]
disk with full operating system ➤ the final disk image size is often measured in GB ➤ extra processes to port a VM ➤ hypervisor re-configuration ➤ process mgmt service ➤ Container image ➤ shares host kernel = smaller image size ➤ can even be “app binary size + 2~5MB” for deployment ➤ docker multi-stage build (NEW FEATURE)

OS Flavor | Disk Size | Container Image Size
Ubuntu 14.04 | > 619MB | > 188.3MB
CentOS 7 | > 680MB | > 229.6MB
Alpine | — | > 5MB
Busybox | — | > 2MB

Data source: Intel white paper
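The multi-stage build mentioned above can be sketched as a Dockerfile like the following — a minimal illustration, where the binary name `vnf-app` and the base image choices are hypothetical. Requires Docker 17.05+.

```dockerfile
# Stage 1: build with the full toolchain (this image stays large).
FROM golang:1.8 AS build
COPY . /src
# Static build so the binary runs on the musl-based Alpine image.
RUN cd /src && CGO_ENABLED=0 go build -o /vnf-app

# Stage 2: copy only the binary into a tiny base image.
FROM alpine:3.6
COPY --from=build /vnf-app /vnf-app
ENTRYPOINT ["/vnf-app"]
```

The final image is roughly “app binary size + the ~5MB Alpine base”, which is the “app binary size + 2~5MB” point from the slide.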
to application ➤ alternative methods: ➤ shared folders, port mapping, ENV … ➤ no easy or user-friendly tooling to help ➤ Container ➤ user-friendly container control tools (dockerd etc.) ➤ volumes ➤ ENV ➤ …
independent guest kernel ➤ Container ➤ weak isolation level ➤ shares the host machine’s kernel ➤ reinforcement ➤ Capabilities ➤ libseccomp ➤ SELinux/AppArmor ➤ while none of them can be easily applied ➤ e.g. which CAPs are needed/unneeded for a specific container? No cloud provider allows users to run containers without wrapping them inside a full-blown VM!
➤ Network performance ➤ same as VM & container ➤ Resource footprint ➤ small (e.g. 30MB) ➤ Portability & Resilience ➤ use Docker images (i.e. MBs) ➤ Configurability ➤ same as Docker ➤ Security & Isolation ➤ hardware virtualization & independent kernel Want to see a demo?
memory footprint ➤ hyperctl exec -t $POD /bin/bash ➤ fork bomb ➤ Do not test this in Docker (without ulimit set) ➤ unless you want to lose your host machine :)
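The fork-bomb warning above is about resource limits. A minimal Python sketch of the containment principle — using the file-descriptor rlimit because it is safe to run, as a stand-in for the process limits (`ulimit`, pids cgroup) that keep a fork bomb from taking down the host:

```python
import os
import resource

# A contained child process: lower its own file-descriptor limit, then
# try to exhaust it -- the kernel refuses, and the host is unaffected.
# The same principle (ulimit / pids cgroup) is what contains a fork bomb.
pid = os.fork()
if pid == 0:
    resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))
    fds = []
    try:
        for _ in range(1000):
            fds.append(os.open("/dev/null", os.O_RDONLY))
        os._exit(1)  # limit never hit: unexpected
    except OSError:
        os._exit(0)  # kernel enforced the limit: contained
else:
    _, status = os.waitpid(pid, 0)
    print("contained:", os.WEXITSTATUS(status) == 0)
```

Without such a limit (the default in older Docker setups), a runaway process competes directly with the host for the shared kernel’s resources.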
(metric) | Container | VM | HyperContainer
… | No | Yes | Yes
Startup time | 380ms | 25s | 500ms
Portable image | Small | Large | Small
Memory footprint | Small | Large | Small
Configurability of app | Flexible | Complex | Flexible
Network performance | Good | Good | Good
Backward compatibility | No | Yes | Yes (bring your own kernel)
Security/Isolation | Weak | Strong | Strong
supervisord to manage multiple apps in one container ➤ try to ensure container start order with hacky scripts ➤ try to copy files from one container to another ➤ try to connect to a peer container across the whole network stack ➤ So Pod is ➤ the group of super-affinity containers ➤ the atomic scheduling unit ➤ the “process group” in the container cloud ➤ also how HyperContainer matches the Kubernetes philosophy [Diagram: Pod — app container, infra container, init container, log, volume]
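As an illustration, a super-affinity group like the one on the slide expressed as a Pod spec — all names (`vnf-pod`, `my-vnf-app`, paths) are hypothetical; the point is two tightly coupled containers sharing one volume, plus an init container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vnf-pod                 # hypothetical names throughout
spec:
  initContainers:
  - name: setup                 # runs to completion before app containers
    image: busybox
    command: ["sh", "-c", "echo ready > /data/ready"]
    volumeMounts:
    - {name: shared, mountPath: /data}
  containers:
  - name: app                   # the application container
    image: my-vnf-app
    volumeMounts:
    - {name: shared, mountPath: /data}
  - name: log                   # sidecar tailing logs from the same volume
    image: busybox
    command: ["sh", "-c", "tail -f /data/app.log"]
    volumeMounts:
    - {name: shared, mountPath: /data}
  volumes:
  - name: shared
    emptyDir: {}
```

The containers share the volume and network namespace and are scheduled as one atomic unit — copying files between them or hacking start order with scripts becomes unnecessary.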
1.6.0 release note [Diagram: Container Runtime Interface (CRI) — kubelet issues the same call sequence to both the docker runtime and the hyper runtime: 1. RunPodSandbox(foo) 2. CreateContainer(A) 3. StartContainer(A) 4. CreateContainer(B) 5. StartContainer(B); the docker runtime backs the Pod sandbox “foo” with containers A and B on the node, while the hyper runtime backs it with a VM holding A and B]
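The call sequence in the diagram can be sketched with a toy runtime shim — the method names mirror the real CRI RPCs, but the classes here are illustrative, not actual kubelet code:

```python
# Toy sketch of the CRI flow: kubelet issues the same RPC sequence to
# any runtime; only the sandbox implementation differs (containers on
# the node for the docker shim, a lightweight VM for hyper).
class ToyRuntime:
    def __init__(self, sandbox_kind):
        self.sandbox_kind = sandbox_kind
        self.calls = []

    def run_pod_sandbox(self, pod):        # CRI: RunPodSandbox
        self.calls.append(f"RunPodSandbox({pod}) as {self.sandbox_kind}")

    def create_container(self, name):      # CRI: CreateContainer
        self.calls.append(f"CreateContainer({name})")

    def start_container(self, name):       # CRI: StartContainer
        self.calls.append(f"StartContainer({name})")

def sync_pod(runtime, pod, containers):
    """What kubelet does for a new Pod, expressed in CRI terms."""
    runtime.run_pod_sandbox(pod)
    for c in containers:
        runtime.create_container(c)
        runtime.start_container(c)

docker_rt = ToyRuntime("infra container")  # docker runtime
hyper_rt = ToyRuntime("VM")                # hyper runtime
for rt in (docker_rt, hyper_rt):
    sync_pod(rt, "foo", ["A", "B"])
print(docker_rt.calls[0])
```

The same `sync_pod` driver works against both runtimes — which is exactly why CRI lets HyperContainer plug into Kubernetes unchanged.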
can communicate with all other Pods without NAT ➤ Nodes reach Pods ➤ all nodes can communicate with all Pods (and vice versa) without NAT ➤ IP addressing ➤ a Pod in the cluster can be addressed by its IP
➤ Network: Namespace = 1: N ➤ each tenant (created by Keystone) has its own Network ➤ the Network Controller is responsible for the lifecycle of Network objects ➤ a control loop that creates/deletes Neutron “net”s based on API object changes
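The control loop can be sketched as a reconcile step — the `create`/`delete` callbacks here are hypothetical stand-ins for the Neutron API calls:

```python
# Sketch of the Network controller's control loop: diff the desired
# Network objects (from the API server) against the Neutron "net"s
# that actually exist, then converge toward the desired state.
def reconcile_networks(desired, actual, create, delete):
    for net in sorted(desired - actual):
        create(net)   # new Network object -> create a Neutron net
    for net in sorted(actual - desired):
        delete(net)   # Network object removed -> delete the Neutron net

created, deleted = [], []
reconcile_networks(
    desired={"tenant-a", "tenant-b"},   # Network objects in the API server
    actual={"tenant-b", "stale"},       # nets that exist in Neutron
    create=created.append,
    delete=deleted.append,
)
print(created, deleted)
```

Running the loop repeatedly makes it level-triggered: a missed event is repaired on the next pass, which is the standard Kubernetes controller pattern.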
Network can reach each other directly through IP ➤ a Pod’s network maps to a Neutron “port” ➤ kubelet is responsible for Pod network setup ➤ let’s see how kubelet works
HandlePods {Add, Update, Remove, Delete, …} [Diagram: kubelet SyncLoop — fed by PodUpdate, PLEG, NodeStatus and Network Status; works with the status Manager, volume Manager and image Manager. A Pod Update Worker (e.g. ADD): • generate Pod status • check volume status (more on this later) • use hyper runtime to start containers • set up Pod network (see next slide)]
➤ Pods and Nodes are isolated into different networks ➤ Hypernetes uses a built-in ipvs as the Service LB ➤ handles all Services in the same namespace ➤ follows the OnServiceUpdate and OnEndpointsUpdate workflow ➤ ExternalProvider ➤ an OpenStack LB will be created for the Service ➤ e.g. curl 58.215.33.98:8078
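Conceptually, the Service LB maps a Service VIP to its Endpoints and spreads traffic across them. A toy round-robin sketch (not the actual Hypernetes code; ipvs supports several scheduling algorithms, round-robin is just the simplest):

```python
import itertools

# Toy Service LB: keeps a VIP -> endpoints table, refreshed by
# OnServiceUpdate / OnEndpointsUpdate events, and picks one real
# backend per connection (round-robin here).
class ToyServiceLB:
    def __init__(self):
        self.table = {}

    def on_endpoints_update(self, vip, endpoints):
        self.table[vip] = itertools.cycle(endpoints)

    def pick(self, vip):
        return next(self.table[vip])

lb = ToyServiceLB()
lb.on_endpoints_update("10.0.0.1:8078",
                       ["10.244.1.2:8078", "10.244.2.3:8078"])
picks = [lb.pick("10.0.0.1:8078") for _ in range(3)]
print(picks)
```

The real ipvs does this in the kernel per connection; the table-refresh-on-event shape is the part the slide’s workflow describes.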
Linux container: 1. query Nova to find the node 2. attach Cinder volume to a host path 3. bind-mount the host path into Pod containers ➤ HyperContainer: ➤ directly attach block devices to the Pod ➤ no extra time to query Nova ➤ no need to install full OpenStack [Diagram: Enhanced Cinder volume plugin — the Volume Manager reconciles the desired world, attaching the volume and mounting it at each Pod’s mountPath]
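The two attach paths can be sketched side by side — the helpers below are stubs that just record each step; all names are hypothetical:

```python
# Toy sketch of the two Cinder attach paths from the slide.
def attach_cinder_volume(runtime, steps):
    if runtime == "linux-container":
        steps.append("query Nova to find the node")
        steps.append("attach Cinder volume to a host path")
        steps.append("bind-mount the host path into Pod containers")
    elif runtime == "hypercontainer":
        # No Nova query and no host-side mount: the block device
        # goes straight into the Pod's VM.
        steps.append("attach block device directly to the Pod")

lc_steps, hc_steps = [], []
attach_cinder_volume("linux-container", lc_steps)
attach_cinder_volume("hypercontainer", hc_steps)
print(len(lc_steps), len(hc_steps))
```

The shorter path is the slide’s point: fewer steps, no Nova round-trip, no full OpenStack install on the node.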
[Diagram: Hypernetes architecture — Masters run kube-apiserver (×3) with the Network object; OpenStack services: Keystone, Neutron, Cinder, Ceph; each node runs kubelet with the Enhanced Cinder Plugin, hosting VNF Pods] The next goals of h8s: ➤ modular CNI ➤ specific plugin for block devices ➤ TPR
and yamls can be found here: ➤ https://github.com/hyperhq/hypernetes ➤ https://github.com/Metaswitch/clearwater-docker $ kubectl create -f clearwater-docker/kubernetes/