AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Talk at the Cloud Native Computing Foundation meetup @dcard.tw
Eric Khun
October 26, 2020
Transcript
AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Cloud Native Computing Foundation meetup @dcard.tw, @eric_khun
Make it work, make it right, make it fast.
Kent Beck (Agile Manifesto, Extreme Programming)
Buffer
Buffer • 80 employees, 12 time zones, all remote
Quick intro
Main pipelines flow
What it can look like... (Golang talk @Maicoin):
How do we send posts to social media?
A bit of history...
- 2010 -> 2012: Joel (founder/CEO), 1 cronjob on a Linode server, $20/mo, 512 MB of RAM
- 2012 -> 2017: Sunil (ex-CTO), crons running on AWS Elastic Beanstalk / supervisord
- 2017 -> now: Kubernetes / CronJob controller
AWS Elastic Beanstalk: Kubernetes:
At what scale? ~ 3 million SQS messages per hour
Different patterns for many queues
Are our workers (consumers of the SQS queues) efficient?
Empty messages? > Workers try to pull messages from SQS,
but receive "nothing" to process
Number of empty messages per queue
Sum of empty messages on all queues
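To make the pitfall concrete, here is a minimal consumer sketch in Go (aws-sdk-go-v2), not code from the talk: the queue URL is a placeholder, and with the SQS default of short polling (WaitTimeSeconds = 0) an idle queue makes this loop spin while every empty ReceiveMessage call is still billed.

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/posts-queue" // placeholder

	for {
		// With the default short polling, the call returns immediately,
		// so on an idle queue this loop hammers the API with billable
		// requests that carry no messages ("empty messages").
		out, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
			QueueUrl: &queueURL,
		})
		if err != nil {
			log.Println("receive error:", err)
			continue
		}
		if len(out.Messages) == 0 {
			continue // paid for the call, got nothing to process
		}
		// ... process and delete out.Messages ...
	}
}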
1,000,000 API calls to AWS cost $0.40. We have 7.2B
calls/month for "empty messages". It costs ~$25k/year > Me:
AWS SQS Doc
Or in the AWS console
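The deck doesn't reprint the exact setting, but the SQS docs/console slides most likely point at long polling. A minimal sketch assuming that is the change: setting ReceiveMessageWaitTimeSeconds on the queue (or WaitTimeSeconds per call) makes ReceiveMessage wait up to 20 seconds for messages instead of returning immediately, which collapses the number of empty responses. Queue URL below is a placeholder.

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/posts-queue" // placeholder

	// Queue-level long polling: every consumer inherits it, even ones that
	// never set WaitTimeSeconds per call. 20 seconds is the maximum; the
	// default is 0 (short polling).
	_, err = client.SetQueueAttributes(ctx, &sqs.SetQueueAttributesInput{
		QueueUrl: &queueURL,
		Attributes: map[string]string{
			"ReceiveMessageWaitTimeSeconds": "20",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}

Changing the attribute on the queue (rather than in every worker) is what makes the fix stick regardless of which client library or default options each consumer uses.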
Results?
empty messages
AWS
$120 > $50 daily: ~$70 saved per day > ~$2,000 / month >
~$25,000 / year (it's USD, not TWD)
Paid for querying "nothing"
(for the past 8 years)
Benefits:
- Saving money
- Less CPU usage (fewer empty requests)
- Less throttling (misleading)
- Fewer containers > better resource allocation: memory/CPU requests
Why did that happen?
Default options
Never questioning what's working decently, or the way it's always been done
What could have helped?
- Infra as code (explicit options / standardization)
- SLI/SLOs (keep re-evaluating what's important)
- AWS architecture reviews (tagging / recommendations from AWS solutions architects)
Make it work, make it right, make it fast
Do you remember?
Need to run analytics on Twitter/FB/IG/LKD… on millions of posts, faster
Workers consuming time
What’s the problem?
Resources allocated and not doing anything most of the time
Developers trying to find compromises on the number of workers
How to solve it?
Autoscaling! (with Keda.sh) Supported by IBM / Red Hat / Microsoft
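For context, a minimal ScaledObject sketch for KEDA's aws-sqs-queue scaler; the Deployment name, queue URL, and thresholds below are placeholders, not values from the talk.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-scaler
spec:
  scaleTargetRef:
    name: sqs-worker              # hypothetical worker Deployment
  minReplicaCount: 1
  maxReplicaCount: 30
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/posts-queue
        queueLength: "100"        # target messages per replica
        awsRegion: us-east-1

KEDA then adjusts the Deployment's replica count from the approximate number of visible messages, scaling workers up when the queue backs up and back down when it drains.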
Results
But notice anything?
Before autoscaling
After autoscaling
What’s happening?
Downscaling
Why?
Pod deletion lifecycle
What went wrong:
- Workers didn't handle the SIGTERM sent by k8s
- Kept processing messages
- Messages were halfway processed when workers were killed
- Messages were sent back to the queue again
- Fewer workers because of downscaling
Solution:
- When receiving SIGTERM, stop processing new messages
- Set a grace period long enough to process the current message
if (SIGTERM) { // finish current processing and stop receiving new messages }
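A sketch of that idea in Go (the snippet above is pseudocode): trap SIGTERM, stop pulling new messages, and let the in-flight batch finish. processOneBatch is a hypothetical stand-in for the receive/handle/delete cycle.

package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Kubernetes sends SIGTERM when it downscales or deletes the pod;
	// NotifyContext cancels ctx when that signal arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			// Stop receiving new messages; the previous batch has already
			// finished, so nothing goes back to the queue half-processed.
			log.Println("SIGTERM received, shutting down")
			return
		default:
			processOneBatch() // hypothetical receive -> handle -> delete cycle
		}
	}
}

// processOneBatch is a placeholder for one full SQS receive/process/delete pass.
func processOneBatch() {
	time.Sleep(time.Second)
}

The pod's terminationGracePeriodSeconds (30 seconds by default) also has to cover the worst-case batch, otherwise the kubelet sends SIGKILL before the in-flight message is done.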
And it can also help with SQS empty messages
Make it work, make it right, make it fast
Thanks!
Questions? monitory.io taiwangoldcard.com travelhustlers.co ✈