Istio and the Service Mesh Architecture
DevOps BKK 2018
Manatsawin Hanmongkolchai
September 08, 2018
Transcript
Istio and the Service Mesh Architecture DevOps BKK 2018
About me • Manatsawin Hanmongkolchai • Junior Architect at Wongnai
How I sold Istio to my team
How Wongnai monitors microservices
Microservice monitoring • In-service metrics, e.g. controller time
Microservice monitoring • AWS X-Ray SDK
Microservice monitoring • Sentry
Microservice monitoring • ELB error rate
Microservice monitoring • These must be integrated into your service (AWS X-Ray)
Microservice monitoring • The problem in the microservice world: services can be written in many languages, and not all tools support every language
Microservice monitoring • The problem in the microservice world: people in a rush skip implementing proper monitoring
Meet Istio
Service mesh • Istio handles inter-service connections via a sidecar
How does the Istio sidecar work? Istio uses an admission controller to install two containers in your pod:
1. An init container that sets up the transparent-proxy iptables rules (as root)
2. Envoy, running alongside your app as the transparent proxy
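As a rough sketch, automatic injection is typically switched on per namespace with the `istio-injection=enabled` label; the namespace name below is made up:

```yaml
# Hypothetical namespace manifest. With this label, Istio's mutating
# admission webhook injects the init container and the Envoy sidecar
# into every pod created in the namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    istio-injection: enabled
```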
What Istio can do for you Monitoring • Network calls
• Tracing
Network monitoring • Istio provides insight into your network at layer 7
Dashboard: total requests, 4xx, 5xx; request count and response time per service
Service network monitoring • Measured client side: request count, success rate, response time, speed (for TCP) • Also measured server side
Who calls me?
Distributed Tracing • All incoming/outgoing HTTP calls are traced to Jaeger • You need to propagate the OpenTracing headers from incoming calls to outgoing calls so requests are tracked correctly
Distributed Tracing • The easiest way is to integrate Zipkin/OpenTracing into your app
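The propagation itself can be sketched framework-agnostically. The header names are the standard B3/Envoy tracing headers; `incoming_headers` stands in for whatever your web framework exposes:

```python
# Sketch: copy the tracing headers from an incoming request so they can
# be attached to outgoing calls, letting Envoy/Jaeger stitch the spans
# of one request into a single trace.
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

def tracing_headers(incoming_headers):
    """Return only the tracing headers present on the incoming request."""
    lowered = {k.lower(): v for k, v in incoming_headers.items()}
    return {h: lowered[h] for h in TRACE_HEADERS if h in lowered}

# Usage (hypothetical): forward them on every outgoing call, e.g.
# requests.get("http://backend/api", headers=tracing_headers(request.headers))
```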
Distributed Tracing (trace screenshots)
What Istio can do for you • Traffic Management ◦
Routing ▪ Traffic Shifting ▪ Mirror ◦ Fault Injection ◦ Circuit Breaker
Routing • A Kubernetes Service operates at layer 4: the cluster IP balances connections, not requests, across backends
Routing • Istio operates at layer 7, so Envoy can load-balance per call across backends
Split traffic • Split traffic between service versions (e.g. send 1% to the new version)
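A hedged sketch of such a split, using Istio's VirtualService; the service name and subsets (`myservice`, `v1`, `v2`) are placeholders, and the subsets would be defined in a matching DestinationRule:

```yaml
# Hypothetical 99/1 traffic split between two versions of a service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
        subset: v1
      weight: 99
    - destination:
        host: myservice
        subset: v2
      weight: 1
```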
Mirror traffic • Test in production by cloning traffic: Envoy routes each request to the live version and mirrors a copy to the test version
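Mirroring is configured on the route as well; again the host and subset names (`myservice`, `live`, `test`) are placeholders. Responses from the mirrored (test) destination are discarded, so users only see the live version's responses:

```yaml
# Hypothetical mirror: live traffic is served by the "live" subset,
# and a fire-and-forget copy of each request goes to the "test" subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
        subset: live
    mirror:
      host: myservice
      subset: test
```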
Fault Injection • Intentionally making services worse • Why? Let's hear a story
Fault Injection • Site Reliability Engineering: How Google Runs Production Systems • landing.google.com/sre/book/
#WongnaiIsHiring • Wongnai is looking for our first Site Reliability
Engineer • careers.wongnai.com
Chubby
Fault Injection Over time, we found that the failures of
the global instance of Chubby consistently generated service outages.
Fault Injection As it turns out, true global Chubby outages
are so infrequent that service owners began to add dependencies to Chubby assuming that it would never go down.
Fault Injection The solution to this Chubby scenario is interesting:
SRE makes sure that global Chubby meets, but does not significantly exceed, its service level objective.
Fault Injection In any given quarter, if a true failure
has not dropped availability below the target, a controlled outage will be synthesized by intentionally taking down the system.
Fault Injection • Slow down services ◦ Delay 80% of
requests for 5 seconds • Make errors ◦ Return 500 error code for 80% of requests
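The two fault types above map directly onto the `fault` block of a VirtualService; this sketch uses the slide's numbers (80%, 5 s, HTTP 500) with a placeholder service name, and in practice you would inject only one fault type at a time:

```yaml
# Hypothetical fault injection: delay 80% of requests by 5 seconds,
# or (abort stanza) fail 80% of requests with a 500.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - fault:
      delay:
        percent: 80
        fixedDelay: 5s
      # abort:
      #   percent: 80
      #   httpStatus: 500
    route:
    - destination:
        host: myservice
```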
Circuit Breaker • Remove a backend from service if it returns too many errors in a row (frontend, backend, work queue: 503, timeout, F5)
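In Istio this ejection behavior lives in a DestinationRule's outlier detection; the host name and the thresholds below are illustrative, not values from the talk:

```yaml
# Hypothetical circuit breaker: eject a backend from the load-balancing
# pool after 5 consecutive errors, for at least 30 seconds.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
spec:
  host: backend
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
```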
Summary • Istio provides visibility into and configurability of your network. This is traditionally done by adding a library, but in a microservice world you need a cross-language solution
The catch • Here's what we found while moving to Istio • While it requires zero code changes, your service must already be a well-behaved cloud application
The catch • Do not connect directly to pod IPs (e.g. no client-side service discovery: just use the cluster IP and avoid headless services)
The catch • Do not mix port types in the cluster (e.g. don't run an HTTP server on port 6379 while another pod runs a plain TCP service on the same port)
The catch • Set the Host header to the actual destination. Don't connect to the gateway while setting the Host header to another service (e.g. cooking) ◦ This case is really hard to debug...
The catch • External services (i.e. outside Kubernetes) that fall inside the captured IP range must have a ServiceEntry defined ◦ ServiceEntry is cluster-wide
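A minimal sketch of such a ServiceEntry for an external dependency; the host name, address, and port are invented for illustration:

```yaml
# Hypothetical ServiceEntry so the mesh can route to a database that
# lives outside Kubernetes but inside the captured IP range.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - db.example.internal
  addresses:
  - 10.0.0.10/32
  ports:
  - number: 5432
    name: tcp-db
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 10.0.0.10
```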
Slides on speakerdeck.com/whs