AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Talk at the Cloud Native Computing Foundation meetup @dcard.tw
Eric Khun
October 26, 2020
Transcript
AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Cloud Native Computing Foundation meetup @dcard.tw @eric_khun
"Make it work, Make it right, Make it fast" - Kent Beck (Agile Manifesto / Extreme Programming)
Buffer
Buffer • 80 employees, 12 time zones, all remote
Quick intro
Main pipelines flow
What it can look like ... (Golang talk @Maicoin)
How do we send posts to social media?
A bit of history...
2010 -> 2012: Joel (founder/CEO), 1 cronjob on a Linode server, $20/mo, 512 MB of RAM
2012 -> 2017: Sunil (ex-CTO), crons running on AWS Elastic Beanstalk / supervisord
2017 -> now: Kubernetes / CronJob controller
AWS Elastic Beanstalk: Kubernetes:
At what scale? ~ 3 million SQS messages per hour
Different patterns for many queues
Are our workers (consumers of the SQS queues) efficient?
Empty messages? > Workers try to pull messages from SQS but receive "nothing" to process
Number of empty messages per queue
Sum of empty messages on all queues
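These per-queue counts are exposed by SQS as the CloudWatch metric NumberOfEmptyReceives (AWS/SQS namespace). A minimal sketch of pulling the same numbers with aws-sdk-go-v2; the queue name "posts-queue" is a placeholder, not one of Buffer's real queues:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	cw := cloudwatch.NewFromConfig(cfg)

	// Hourly sum of "empty" ReceiveMessage calls for one queue over the last 24h.
	out, err := cw.GetMetricStatistics(context.TODO(), &cloudwatch.GetMetricStatisticsInput{
		Namespace:  aws.String("AWS/SQS"),
		MetricName: aws.String("NumberOfEmptyReceives"),
		Dimensions: []types.Dimension{{Name: aws.String("QueueName"), Value: aws.String("posts-queue")}},
		StartTime:  aws.Time(time.Now().Add(-24 * time.Hour)),
		EndTime:    aws.Time(time.Now()),
		Period:     aws.Int32(3600),
		Statistics: []types.Statistic{types.StatisticSum},
	})
	if err != nil {
		panic(err)
	}
	for _, dp := range out.Datapoints {
		fmt.Printf("%s  empty receives: %.0f\n", dp.Timestamp.Format(time.RFC3339), *dp.Sum)
	}
}
```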
1,000,000 API calls to AWS cost $0.40. We have 7.2B calls/month for "empty messages". It costs ~$25k/year. > Me:
AWS SQS Doc
Or in the AWS console
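The transcript doesn't quote the doc page, but given the "Default options" slide later, the fix implied here is SQS long polling: the default ReceiveMessageWaitTimeSeconds is 0 (short polling), so a consumer that never sets WaitTimeSeconds pays for a request on every empty poll. A minimal consumer sketch with aws-sdk-go-v2; the queue URL is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	client := sqs.NewFromConfig(cfg)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/posts-queue" // placeholder

	for {
		// WaitTimeSeconds > 0 enables long polling: the call waits up to 20s for a
		// message instead of returning (and being billed) immediately when empty.
		out, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: 10,
			WaitTimeSeconds:     20,
		})
		if err != nil {
			panic(err)
		}
		for _, msg := range out.Messages {
			fmt.Println("processing:", aws.ToString(msg.Body))
			_, _ = client.DeleteMessage(context.TODO(), &sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: msg.ReceiptHandle,
			})
		}
	}
}
```

The same default can also be changed queue-wide through the ReceiveMessageWaitTimeSeconds queue attribute, which is what editing it in the AWS console does.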
Results?
(charts: empty messages, AWS costs)
From $120 to $50 spent daily > ~$2,000 saved / month > ~$25,000 / year (it's USD, not TWD)
Paid for querying "nothing" (for the past 8 years)
Benefits:
- Saving money
- Less CPU usage (fewer empty requests)
- Less throttling (misleading)
- Fewer containers > better resource allocation: memory/CPU requests
Why did that happen?
Default options
Never questioning what's working decently, or the way it's always been done
What could have helped?
- Infra as code (explicit options / standardization)
- SLIs/SLOs (keep re-evaluating what's important)
- AWS architecture reviews (tagging / recommendations from AWS solutions architects)
Make it work, Make it right, Make it fast
Do you remember?
Need analytics on Twitter/FB/IG/LKD… on millions of posts, faster
workers consuming time
What’s the problem?
Resources allocated and not doing anything most of the time
Developers trying to find compromises on the number of workers
How to solve it?
Autoscaling! (with Keda.sh) Supported by IBM / Red Hat / Microsoft
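KEDA scales a Deployment on the SQS backlog via a ScaledObject. A hedged sketch of what such a manifest looks like, not Buffer's actual config: the Deployment name, queue URL, and numbers are placeholders, and the AWS authentication wiring (TriggerAuthentication or pod identity) is omitted.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: posts-worker-scaler
spec:
  scaleTargetRef:
    name: posts-worker        # hypothetical Deployment running the SQS consumers
  minReplicaCount: 0          # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/posts-queue
        queueLength: "100"    # target backlog per replica
        awsRegion: us-east-1
```

KEDA then drives a Horizontal Pod Autoscaler from the queue length, so replicas follow the backlog instead of being provisioned for peak.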
Results
But notice anything?
Before autoscaling
After autoscaling
What’s happening?
Downscaling
Why?
delete pod lifecycle
What went wrong:
- Workers didn't handle the SIGTERM sent by k8s
- They kept processing messages
- Messages were halfway processed when the pod was killed
- Messages were sent back to the queue again
- Fewer workers because of downscaling
Solution:
- When receiving SIGTERM, stop processing new messages
- Set a grace period long enough to process the current message

if (SIGTERM) { // finish current processing and stop receiving new messages }
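A sketch of that solution in Go, assuming the long-polling consumer loop from earlier; receive, process, and ack are hypothetical stand-ins for the real SQS calls. The idea: cancel a context on SIGTERM, stop picking up new messages, let the in-flight one finish, and set the pod's terminationGracePeriodSeconds longer than one message's worst-case processing time.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// ctx is cancelled when Kubernetes sends SIGTERM while downscaling the pod.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			fmt.Println("SIGTERM received: no new messages, exiting cleanly")
			return
		default:
		}

		msg, ok := receive() // hypothetical wrapper around sqs.ReceiveMessage (long polling)
		if !ok {
			continue
		}
		process(msg) // finish the in-flight message even if SIGTERM arrives meanwhile
		ack(msg)     // delete from the queue only after it was fully processed
	}
}

// Hypothetical stand-ins for the real SQS calls.
func receive() (string, bool) { time.Sleep(time.Second); return "post", true }
func process(m string)        { fmt.Println("processing", m) }
func ack(m string)            { fmt.Println("deleting", m) }
```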
And it can also help with SQS empty messages
Make it work, Make it right, Make it fast
Thanks!
Questions? monitory.io taiwangoldcard.com travelhustlers.co ✈