Concurrency Basics for Elixir
Maciej Kaszubowski
August 02, 2018
Slides from an internal presentation at https://appunite.com
Transcript
Concurrency Basics for Elixir-based Systems
So, what’s concurrency?
Sequential Execution (3 functions, 1 thread) vs. Concurrent Execution (3 functions, 3 threads), with preemptive scheduling
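As a rough Elixir sketch (not from the slides; the work function is hypothetical): three calls run in one process take three times as long, while three processes run them concurrently.

work = fn id ->
  # simulate 100 ms of I/O-bound work
  Process.sleep(100)
  {:done, id}
end

# Sequential: one process, roughly 300 ms in total
Enum.map(1..3, work)

# Concurrent: one process per call, roughly 100 ms of wall-clock time
1..3
|> Enum.map(fn id -> Task.async(fn -> work.(id) end) end)
|> Task.await_many()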
Where’s the benefit?
[Timeline diagrams: sequential vs. concurrent handling of three requests (Req1, Req2, Req3), showing execution time vs. waiting time, for CPU-bound and I/O-bound workloads]
Concurrent or Parallel? What's the difference?
Concurrent Execution (3 functions, 3 threads) vs. Parallel Execution (3 functions, 3 threads, 2 cores)
How many cores?
root@kingschat-api-c8f8d6b76-4j65j:/app# nproc
12
root@tahmeel-api-prod-b5979bdc6-q5wz6:/# nproc
1
One Erlang scheduler per core (by default)
:observer_cli.start()
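To see what the VM is actually working with (an aside, not part of the deck), the runtime can also be queried directly:

System.schedulers_online()          # schedulers currently online, usually one per core
:erlang.system_info(:schedulers)    # schedulers started at boot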
[Timeline diagrams: sequential vs. concurrent vs. parallel handling of three requests, showing execution time vs. waiting time]
Sequential execution
Phoenix Request: Req 1 -> Resp, Req 2 -> Resp, Req 3 -> Resp, one after another
Concurrent execution
Phoenix Request spawns Task 1, Task 2, Task 3; each sends its request (Req 1, Req 2, Req 3) and receives its response concurrently
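A hedged sketch of that pattern in a Phoenix controller action; fetch_user/1, fetch_posts/1 and fetch_stats/1 are hypothetical stand-ins for the three outgoing requests.

def show(conn, %{"id" => id}) do
  # Each Task runs in its own process, so the three requests overlap.
  tasks = [
    Task.async(fn -> fetch_user(id) end),
    Task.async(fn -> fetch_posts(id) end),
    Task.async(fn -> fetch_stats(id) end)
  ]

  # The Phoenix request process only waits as long as the slowest call.
  [user, posts, stats] = Task.await_many(tasks, 5_000)

  render(conn, "show.html", user: user, posts: posts, stats: stats)
end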
[Timeline diagram: the APP Server sends R1, R2, R3 to a DB Server with 3 cores and sends the responses back, showing execution time vs. waiting time]
How much can we gain?
Amdahl’s Law
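For reference (the formula is not spelled out in the transcript): with a fraction p of the work parallelisable and N cores, Amdahl's Law bounds the speedup at

S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

so even with unlimited cores, the serial fraction (1 - p) caps the gain.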
Amdahl’s Law in a nutshell: the more synchronisation, the less benefit from multiple cores
[Timeline diagram: APP Server and DB Server (3 cores), almost 100% parallel (almost no synchronisation)]
But…
[Same diagram, annotated: the DB execution time is not constant, and the DB Server's capacity (3 cores) is not infinite]
[Timeline diagrams: when R4 arrives while R1, R2, R3 still occupy the DB Server's 3 cores, it has to wait; as R5, R6, R7 queue up behind it, waiting time keeps growing]
Remember this? The Phoenix Request spawning Task 1, Task 2, Task 3 to send Req 1, Req 2, Req 3
This isn’t exactly true
Connection pool (prevents overworking the DB)
Pool Manager (Blocks until a free worker is available)
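In an Ecto application the pool is plain configuration; a hedged example (MyApp.Repo and the numbers are illustrative, 10 being Ecto's default pool size):

config :my_app, MyApp.Repo,
  pool_size: 10,          # at most 10 concurrent DB connections
  queue_target: 50,       # ms a checkout may wait before the pool is considered overloaded
  queue_interval: 1_000   # time window (ms) over which queue_target is evaluated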
It gets worse
Pool Manager: the mailbox has to be synchronised
Pool Manager: message passing is just copying data in shared memory
Pool Manager: remember semaphores?
Other synchronisation points: Logger, Metrics, Sentry, the network stack, OS threads (garbage collection), the data bus, virtual machines, memory characteristics (e.g. processor caches), …
That’s hard
That’s REALLY hard. Seriously, people spend their entire careers on this
So, what to do?
Measure. Measure. Measure.
Measure ON PRODUCTION. You WILL get false results on staging/locally
Measure the entire system. You WILL get false results for single functions
Measure ONLY IF YOU HAVE TRAFFIC
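One concrete way to measure this in production (a sketch under the assumption of an Ecto repo named MyApp.Repo, which emits [:my_app, :repo, :query] telemetry events): log how long each query waits for a pool connection versus how long it executes.

:telemetry.attach(
  "log-db-queue-time",
  [:my_app, :repo, :query],
  fn _event, measurements, _metadata, _config ->
    # measurements are in native time units; convert before logging
    to_ms = fn native -> System.convert_time_unit(native || 0, :native, :millisecond) end
    IO.puts("db query: queue=#{to_ms.(measurements[:queue_time])}ms query=#{to_ms.(measurements[:query_time])}ms")
  end,
  nil
)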
“Premature optimization is the root of all evil” (Donald Knuth)
If something takes X ms, it will always take X ms.
Async execution cannot “remove” this time, it can only hide it.
BACK PRESSURE
Producer, Consumer, Consumer: the producer keeps pushing work to the two consumers
Consumer: Stop
Consumer: OK, give me more
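In Elixir, the GenStage behaviour implements exactly this demand-driven protocol; a minimal hedged sketch (module names are made up):

defmodule Numbers.Producer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, 0, name: __MODULE__)

  def init(counter), do: {:producer, counter}

  # Only runs when a consumer asks for events: this is the back pressure.
  def handle_demand(demand, counter) when demand > 0 do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Numbers.Consumer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, :ok)

  def init(:ok) do
    # max_demand caps how much work is in flight at any time.
    {:consumer, :ok, subscribe_to: [{Numbers.Producer, max_demand: 10, min_demand: 5}]}
  end

  def handle_events(events, _from, state) do
    Enum.each(events, &IO.inspect/1)
    {:noreply, [], state}
  end
end

The consumer never receives more than max_demand events at a time; when it falls behind, the producer's handle_demand simply is not called, which is the "Stop" / "OK, give me more" exchange from the slides.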
Back pressure
Thanks!