Distributed Elixir
Maciej Kaszubowski
July 07, 2018
Presentation about some of the tools for distributed programming in Elixir
Transcript
It’s scary out there
Organisational Matters
We’re 1 year old!
Summer break (probably)
We’re looking for speakers!
It’s scary out there: Distributed Systems in Elixir (Poznań Elixir Meetup #8)
Pid 1 on Node A, Pid 2 on Node B
The basics
iex --name [email protected] --cookie cookie -S mix
Node.connect(:"[email protected]")
(DEMO)
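A minimal sketch of this flow, assuming two nodes named a@127.0.0.1 and b@127.0.0.1 and the cookie from the command above (the node names are placeholders, not the ones used in the demo):

    # Terminal 1:  iex --name a@127.0.0.1 --cookie cookie -S mix
    # Terminal 2:  iex --name b@127.0.0.1 --cookie cookie -S mix

    # On node a, connect to node b and inspect the cluster:
    Node.connect(:"b@127.0.0.1")  #=> true
    Node.list()                   #=> [:"b@127.0.0.1"]

    # Spawn a process on the remote node; the returned pid is usable locally:
    pid = Node.spawn(:"b@127.0.0.1", fn -> IO.puts("hello from #{Node.self()}") end)
    node(pid)                     #=> :"b@127.0.0.1"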
#PID<0.94.0>: the first number is a node identifier (relative to the current node), where 0 means a local process; the second number is the process id.
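An illustrative iex session, reusing the placeholder node name from the sketch above (the numbers themselves are arbitrary):

    iex> pid = self()
    #PID<0.94.0>
    iex> node(pid)
    :"a@127.0.0.1"
    # A pid received from another node prints with a non-zero first
    # component (e.g. #PID<13137.94.0>), which identifies that node
    # as seen from the current node.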
How does it work?
Pid 1 (Node A) and Pid 2 (Node B) are linked by a TCP connection. When Pid 1 calls send(pid2, msg): destination_node = node(pid), the message is encoded with :erlang.term_to_binary(msg), shipped over the TCP connection, decoded on Node B with :erlang.binary_to_term(encoded), and Pid 2 receives msg.
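The runtime performs these steps transparently; the sketch below only makes the encode/decode step visible (pid2 stands for a pid on the remote node, obtained e.g. from Node.spawn/2):

    msg = {:hello, %{from: Node.self()}}

    # What the VM does under the hood for a remote send:
    binary  = :erlang.term_to_binary(msg)     # serialise on the sending node
    decoded = :erlang.binary_to_term(binary)  # deserialise on the receiving node
    ^decoded = msg                             # the term survives the round trip

    # From the programmer's point of view it is still just:
    send(pid2, msg)                            # pid2 is assumed to live on Node B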
Distributed Systems?
Distributed Systems? Solved!
Well, not exactly…
Difficulties
Node A, Node B, Node C, Node D
A lot of messages
us-east-1 us-west-2
8 fallacies of distributed computing: 1. The network is reliable 2. Latency is zero 3. Bandwidth is infinite 4. The network is secure 5. Topology doesn’t change 6. There is one administrator 7. Transport cost is zero 8. The network is homogeneous
CAP THEOREM
us-west-2 / us-east-1: Set X = 5 on one side, Read X and Set X = 7 on the other
Consistency or Availability (under network partition)
In practice: Consistency or Speed
Guarantees
Guarantees: Pid 1 and Pid 3 both send to Pid 2: send(pid2, m1), send(pid2, m2), send(pid2, m3) from one process and send(pid2, m4), send(pid2, m5), send(pid2, m6) from the other.
• Ordering between two processes is preserved
• Delivery is not guaranteed
• Ordering between different processes is not guaranteed
[m1, m2, m3, m4, m5, m6]
[m4, m5, m6, m1, m2, m3]
[m1, m4, m2, m5, m3, m6]
[m1, m2, m3]
[m1, m3, m5, m6]
[]
[m1, m3, m2, m4, m5, m6]
[m3, m3]
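A small sketch of these guarantees (everything here is illustrative): two senders, one receiver; each sender's own messages keep their order, but the interleaving between the two, and delivery itself if a node disappears, is not guaranteed.

    receiver =
      spawn(fn ->
        for _ <- 1..6 do
          receive do
            msg -> IO.inspect(msg)
          end
        end
      end)

    # "Pid 1": sends m1, m2, m3
    spawn(fn -> for m <- [:m1, :m2, :m3], do: send(receiver, m) end)
    # "Pid 3": sends m4, m5, m6
    spawn(fn -> for m <- [:m4, :m5, :m6], do: send(receiver, m) end)

    # :m1 always prints before :m2, and :m2 before :m3 (likewise m4..m6),
    # but the two sequences may interleave in any way.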
Phoenix Request A (User Logged In) and Phoenix Request B (User Logged OUT): the messages they produce can arrive in either order, so the logout can arrive first.
Unfortunately, things tend to work fine locally
The Tools
:global
Pid 1 (Node A) calls :global.register_name("global", self()). Node A asks Node B to register Pid 1 as "global", Node B answers "Sure", and afterwards :global.whereis_name("global") = pid1 on both nodes.
What happens when Pid 1 and Pid 2 both call :global.register_name("global", self()) at the same time?
(DEMO)
:global • Single process registration (if everything works OK) • Favours availability over consistency • Information stored locally (reading is fast) • Registration is blocking (may be slow)
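A usage sketch of the :global calls shown above (the registered name is arbitrary; :global accepts any term as a name):

    # Register the current process under a cluster-wide name:
    :yes = :global.register_name("my_process", self())

    # Any connected node can look it up; reads are served from the local copy:
    case :global.whereis_name("my_process") do
      :undefined -> :not_registered
      pid        -> send(pid, :ping)
    end

    # If two nodes register the same name (e.g. during a netsplit), :global
    # resolves the conflict when they reconnect: the default resolver keeps
    # one registration and kills the other process.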
:pg2
Pid1, Pid2 and Pid3 each hold a local copy of the group’s member list, initially [] [] [].
:pg2.create("my_group")
:pg2.join("my_group", self()): a join message goes to the other nodes, which monitor the new member; the lists become [pid1] [pid1] [pid1], and after pid2 joins, [pid1, pid2] [pid1, pid2] [pid1, pid2].
During a network partition the local lists diverge, e.g. [pid1] [pid2] [pid1]; once the partition heals they converge back to [pid1, pid2] everywhere.
It will heal, but the state is inconsistent for some time
Why does it matter?
Pg2 runs on Node A, Node B and Node C, and it is what Phoenix Channels and Phoenix Presence build on.
:pg2 • Process groups • Favours availability over consistency • Information stored locally (reading is fast) • Registration is blocking (may be slow)
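A usage sketch of the :pg2 flow from the diagrams (the group name is arbitrary; note that :pg2 was removed in OTP 24 in favour of the :pg module):

    # Create a group and join it; the membership is replicated to every node:
    :ok = :pg2.create("my_group")
    :ok = :pg2.join("my_group", self())

    # Reads are local, so they are fast:
    members = :pg2.get_members("my_group")

    # A common pattern (roughly what a pub/sub broadcast does):
    for pid <- members, do: send(pid, {:broadcast, :hello})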
Strongly Consistent Solutions
• Consensus: Raft, Paxos, ZAB • Two-phase commit / three-phase commit (2PC/3PC) • Read/Write quorums • Single database as a source of truth
Summary
Distributed Systems
Well, not exactly…
Distributed systems are all about asynchronous messages
Really, there’s no magic
Just asynchronous messages between nodes & node failures & communication failures & network partitions
Distributed systems are all about tradeoffs
Where to go next
Worth looking at • riak_core • Raft • Two-Phase Commit (2PC) / Three-Phase Commit (3PC) • CRDTs • LASP and Partisan
Elixir / Erlang (free online)
Distributed Systems (free PDF)
Theory (The hard stuff)
• https://raft.github.io/ (Raft consensus)
• http://learnyousomeerlang.com/distribunomicon
• https://www.rgoarchitects.com/Files/fallacies.pdf (Fallacies of distributed computing)
• https://dzone.com/articles/better-explaining-cap-theorem (CAP theorem)
• https://medium.com/learn-elixir/message-order-and-delivery-guarantees-in-elixir-erlang-9350a3ea7541 (Elixir message delivery guarantees)
• https://lasp-lang.readme.io/ (LASP)
• https://arxiv.org/pdf/1802.02652.pdf (Partisan paper)
• https://bravenewgeek.com/tag/three-phase-commit/ (3PC)
We’re looking for speakers!
Thank You! Poznań Elixir Meetup #8