Tracing the Linux kernel commit history to understand what cgroup eBPF is and how Cilium uses it to perform NAT on system calls, replacing kube-proxy's iptables rules.
Outline
• commit history
• connect syscall example
• How does Cilium use cgroup eBPF?
• How does Cilium agent prepare the eBPF map?
• How does Cilium eBPF program utilize the map?
amount of executions in kernel.
• Full Control
• Able to change kernel/application behavior on the fly.

Other eBPF Applications
• [XDP] https://blog.cloudflare.com/how-to-drop-10-million-packets/
• [XDP] https://blog.cloudflare.com/unimog-cloudflares-edge-load-balancer/
• [XDP] https://engineering.fb.com/open-source/open-sourcing-katran-a-scalable-network-load-balancer
• https://github.com/zoidbergwill/awesome-ebpf
How does Cilium replace kube-proxy?
2020) https://docs.google.com/presentation/d/1w2zlpGWV7JUhHYd37El_AUZzyUNSvDfktrF5MJ5G8Bs/edit#slide=id.g746fc02b5b_3_33
Recap
• connect syscall example
• Cilium Agent Overview
• LB4_SERVICES_MAP_V2 preparation
• Cilium kube-proxy replacement (application side)
• NAT per connect/getpeername/sendmsg/recvmsg syscall, not per packet (see the sketch below)
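To make the last point concrete, here is a minimal sketch of per-syscall NAT as a cgroup/connect4 eBPF program. The map name lb4_services and the svc_key/svc_val layout are illustrative stand-ins, not Cilium's real LB4_SERVICES_MAP_V2 format or backend-selection logic; in Cilium the agent populates the map from Kubernetes Service/Endpoints objects.

```c
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative key/value layout; not Cilium's actual LB4_SERVICES_MAP_V2. */
struct svc_key {
	__u32 vip;    /* service ClusterIP, network byte order */
	__u32 dport;  /* service port, as exposed in ctx->user_port */
};

struct svc_val {
	__u32 backend_ip;    /* chosen backend pod IP, network byte order */
	__u32 backend_port;  /* backend port, same encoding as ctx->user_port */
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 65536);
	__type(key, struct svc_key);
	__type(value, struct svc_val);
} lb4_services SEC(".maps");

/* Runs once per connect(2) on AF_INET sockets in the attached cgroup.
 * If the destination is a service VIP, rewrite it to a backend address
 * before the connection is established, so the packets that follow need
 * no per-packet translation. */
SEC("cgroup/connect4")
int sock4_connect(struct bpf_sock_addr *ctx)
{
	struct svc_key key = {
		.vip   = ctx->user_ip4,
		.dport = ctx->user_port,
	};
	struct svc_val *svc = bpf_map_lookup_elem(&lb4_services, &key);

	if (svc) {
		ctx->user_ip4  = svc->backend_ip;
		ctx->user_port = svc->backend_port;
	}
	return 1; /* 1 = allow the connect() to proceed */
}

char _license[] SEC("license") = "GPL";
```

Once compiled and loaded (e.g. with clang -target bpf and libbpf), such a program can be attached to a cgroup with something like `bpftool cgroup attach /sys/fs/cgroup connect4 pinned /sys/fs/bpf/sock4_connect`; the Cilium agent performs the equivalent attachment itself and keeps the service map in sync, which is why ClusterIP traffic needs neither iptables rules nor per-packet rewriting.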