AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Talk at the Cloud Native Computing Foundation meetup @dcard.tw
Eric Khun
October 26, 2020
Transcript
AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Cloud Native Computing Foundation meetup @dcard.tw @eric_khun
Make it work, Make it right, Make it fast
Kent Beck (Agile Manifesto / Extreme Programming)
Buffer
Buffer • 80 employees, 12 time zones, all remote
Quick intro
Main pipelines flow
It can look like this... (see my Golang talk @Maicoin)
How do we send posts to social media?
A bit of history...
2010 -> 2012: Joel (founder/CEO) - 1 cronjob on a Linode server, $20/mo, 512 MB of RAM
2012 -> 2017: Sunil (ex-CTO) - crons running on AWS Elastic Beanstalk / supervisord
2017 -> now: Kubernetes / CronJob controller
AWS Elastic Beanstalk: Kubernetes:
At what scale? ~ 3 million SQS messages per hour
Different patterns for many queues
Are our workers (consumers of the SQS queues) efficient?
Are our workers efficient?
Empty messages? > Workers try to poll messages from SQS, but receive “nothing” to process
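As a rough illustration (not Buffer's actual worker; the queue URL below is a placeholder), this is what a short-polling consumer looks like with aws-sdk-go v1: ReceiveMessage with the default WaitTimeSeconds of 0 returns immediately, and every call is billed even when it brings back zero messages.

// Sketch only: a short-polling worker loop with SQS defaults (aws-sdk-go v1).
// Every ReceiveMessage call is billed, even when it returns no messages.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue" // placeholder

	for {
		out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: aws.Int64(10),
			// WaitTimeSeconds not set: defaults to 0 (short polling),
			// so a quiet queue produces a stream of billed "empty" responses.
		})
		if err != nil {
			log.Fatal(err)
		}
		if len(out.Messages) == 0 {
			fmt.Println("empty receive: paid for nothing")
			continue
		}
		// ... process and delete messages here ...
	}
}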
Number of empty messages per queue
Sum of empty messages on all queues
1,000,000 API calls to AWS cost $0.40
We make 7.2B calls/month for “empty messages”
It costs ~$25k/year
> Me:
AWS SQS Doc
Or in the AWS console
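The doc and console screenshots presumably point at SQS long polling, i.e. the ReceiveMessageWaitTimeSeconds queue attribute. A hedged sketch of enabling it for a whole queue (aws-sdk-go v1, placeholder queue URL, not necessarily the exact change Buffer made):

// Sketch: enable long polling at the queue level so ReceiveMessage waits
// up to 20s for a message instead of immediately returning (and billing)
// an empty response.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	_, err := svc.SetQueueAttributes(&sqs.SetQueueAttributesInput{
		QueueUrl: aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"), // placeholder
		Attributes: map[string]*string{
			"ReceiveMessageWaitTimeSeconds": aws.String("20"), // 0 = short polling (default), 20 = max
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Workers can also opt in per call via ReceiveMessageInput.WaitTimeSeconds.
}

With the attribute at 20 seconds, a quiet queue costs one billed call per worker every 20 seconds instead of a tight loop of empty responses.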
Results?
empty messages
AWS
From ~$120/day down to ~$50/day > ~$2,000 saved per month > ~$25,000 per year
(it's USD, not TWD)
Paid for querying “nothing”
(for the past 8 years)
Benefits:
- Saving money
- Less CPU usage (fewer empty requests)
- Less throttling (misleading)
- Fewer containers > better resource allocation: memory/CPU requests
Why did that happen?
Default options
Never questioning what's working decently, or the way it's always been done
What could have helped?
- Infra as code (explicit options / standardization)
- SLI/SLOs (keep re-evaluating what's important)
- AWS architecture reviews (tagging / recommendations from AWS solutions architects)
Make it work, Make it right, Make it fast
Do you remember?
Need to run analytics on Twitter/FB/IG/LKD… on millions of posts, faster
workers consuming time
What’s the problem?
Resources allocated and not doing anything most of the time
Developers trying to find a compromise on the number of workers
How to solve it?
Autoscaling! (with Keda.sh) Supported by IBM / Red Hat / Microsoft
Results
But notice anything?
Before autoscaling
After autoscaling
What’s happening?
Downscaling
Why?
Pod deletion lifecycle
What went wrong:
- Workers didn't handle the SIGTERM sent by k8s
- Kept processing messages
- Messages were halfway processed when the worker was killed
- Messages were sent back to the queue again
- Fewer workers because of downscaling
Solution:
- When receiving SIGTERM, stop processing new messages
- Set a graceful period long enough to process the current message
if (SIGTERM) { // finish current processing and stop receiving new messages }
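A minimal Go sketch of that pattern (illustrative only, not Buffer's code; receiveMessage and process are placeholder stubs): catch SIGTERM, stop picking up new messages, and let the in-flight one finish.

// Sketch: graceful shutdown for a queue worker. On SIGTERM (sent by
// Kubernetes when downscaling), stop receiving new messages and finish
// the message currently being processed before exiting.
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func receiveMessage(ctx context.Context) (string, bool) { /* poll SQS here */ return "", false }
func process(msg string)                                { time.Sleep(2 * time.Second) }

func main() {
	// ctx is cancelled as soon as SIGTERM (or SIGINT) arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			log.Println("SIGTERM received: not taking new messages, exiting")
			return
		default:
		}

		msg, ok := receiveMessage(ctx)
		if !ok {
			continue
		}
		process(msg) // finish this message even if SIGTERM arrives meanwhile
		// ... delete the message from the queue here ...
	}
}

The pod's terminationGracePeriodSeconds just has to be longer than the slowest message, so the loop can drain before Kubernetes sends SIGKILL.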
And it can also help with SQS empty messages
Make it work, Make it right, Make it fast
Thanks!
Questions? monitory.io taiwangoldcard.com travelhustlers.co ✈