• Strong Eventual Consistency (SEC): "Replicas that deliver the same updates have equivalent state."
• Primary requirement: eventual replica-to-replica communication.
• Order insensitive! (Commutativity)
• Duplicate insensitive! (Idempotence): a merge with both properties is sketched below.
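These two properties are exactly what a join-semilattice merge provides. A minimal sketch in Erlang (illustrative only, not Lasp's CRDT library) of a grow-only set whose merge is set union, and therefore commutative and idempotent:

-module(gset_sketch).
-export([new/0, add/2, merge/2]).

%% A grow-only set: state only grows, and merge is a join (union).
new() -> ordsets:new().

add(Element, Set) -> ordsets:add_element(Element, Set).

%% Union is order- and duplicate-insensitive:
%% merge(A, B) =:= merge(B, A) and merge(A, A) =:= A.
merge(A, B) -> ordsets:union(A, B).

Replicas that apply the same adds, in any order and any number of times, converge to the same set.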
• Convergent data structures: the primary data abstraction is the CRDT.
• Enables composition: provides functional composition of CRDTs that preserves the SEC property.
%% Create initial set.
S1 = declare(set),
%% Add elements to initial set and update.
update(S1, {add, [1,2,3]}),
%% Create second set.
S2 = declare(set),
%% Apply map operation between S1 and S2.
map(S1, fun(X) -> X * 2 end, S2).
A runtime system that can scale to large numbers of nodes, is resilient to failures, and provides efficient execution.
• Well-matched to Lattice Processing (Lasp).
• Epidemic broadcast mechanisms provide weak ordering but are resilient and efficient (a minimal gossip sketch follows).
• Lasp's programming model is tolerant to message re-ordering, disconnections, and node failures.
• "Selective Receive": nodes selectively receive and process messages based on interest.
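A minimal anti-entropy loop (an illustrative sketch, not Lasp's actual runtime): each node periodically pushes its full state to one random peer, and because the merge is commutative and idempotent, reordered or duplicated gossip is harmless:

-module(gossip_sketch).
-export([gossip_loop/2]).

%% State is an ordset; Peers is a non-empty list of peer pids.
gossip_loop(State, Peers) ->
    receive
        {remote_state, Remote} ->
            %% Late, repeated, or reordered messages cannot corrupt state.
            gossip_loop(ordsets:union(State, Remote), Peers);
        {add, Element} ->
            gossip_loop(ordsets:add_element(Element, State), Peers)
    after 1000 ->
        %% Push full state to one random peer each second.
        Peer = lists:nth(rand:uniform(length(Peers)), Peers),
        Peer ! {remote_state, State},
        gossip_loop(State, Peers)
    end.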
Operates in a client-server or peer-to-peer mode.
• Broadcast (via gossip, tree, etc.): efficient dissemination of both program state and application state via gossip, broadcast tree, or hybrid mode.
• Auto-discovery: integration with Mesos; auto-discovery of Lasp nodes for ease of configurability.
Advertisements are paid for according to a minimum number of impressions.
• Clients will go offline: clients have limited connectivity, and the system still needs to make progress while clients are offline.
[Figure: the advertisement counter dataflow. User-maintained CRDTs (Rovio Ads, Riot Ads, Contracts) feed Lasp operations (Union, Product, Filter, Read, Remove, Increment) to produce the derived "Ads With Contracts" set; each client holds a single copy of its ad counters (Rovio Ad Counter 1/2, Riot Ad Counter 1/2), and an ad is removed once it reaches 50,000 impressions.]
Everything is modeled through monotonic state growth.
• Arbitrary distribution: the use of convergent data structures allows the computational graph to be arbitrarily distributed.
• Divergence: divergence is a function of the synchronization period.
Provides cross-node message passing.
• Known scalability limitations: analyzed in various academic publications.
• Single connection: head-of-line blocking.
• Full membership: all-to-all failure detection with heartbeats and timeouts.
• Known port: similar to the Solaris sunrpc-style portmap; a well-known port maps to dynamic, port-based services.
• Bridged networking: problematic for clusters using bridged networking with dynamic port allocation.
• Runtime configuration: the application is controlled through runtime environment variables.
• Membership: full membership with Distributed Erlang via EPMD.
• Single EPMD instance per slave: controlled through the use of host networking and HOSTNAME:UNIQUE constraints in Mesos.
• Lasp: local execution using host networking; connects to the local EPMD.
• Service discovery: facilitated by clustering EPMD instances through Sprinter.
Evaluation with Mesos.
• Threads (processes): the low node count in the evaluation required simulating nodes with local threads to increase concurrency.
• Transparent migration: we wanted transparent migration of experiments to an AWS-based Mesos deployment, which was problematic with IP/port assignment and clustering.
• Adapted via environment: adapted the deployment based on detecting whether it was a cloud or local deployment.
Too much work needed to be adapted for things to work correctly.
• Single orchestration task: dispatched events, controlled when to start and stop the evaluation, and performed log aggregation.
• Bottleneck: events were dispatched immediately; correctness would require blocking for processing acknowledgements.
• Unrealistic: in practice, events do not queue up all at once for processing by the client.
Running out of memory.
• Weeks spent adding instrumentation: process-level, VM-level, and Erlang Observer instrumentation to identify CPU- and memory-heavy processes (see the sketch below).
• Dissemination too expensive: 1000 threads talking to a single dissemination process (one Mesos task) leads to backed-up message queues and memory leaks.
• Unrealistic: two different dissemination mechanisms, thread-to-thread and node-to-node, one of which is synthetic.
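Even a tiny helper like the following (illustrative, not the tooling we actually used) is enough to spot a backed-up dissemination process by its message queue length:

-module(queue_sketch).
-export([top_message_queues/1]).

%% Return the N processes with the longest message queues.
top_message_queues(N) ->
    Lens = [{Pid, Len}
            || Pid <- erlang:processes(),
               {message_queue_len, Len} <-
                   [erlang:process_info(Pid, message_queue_len)]],
    lists:sublist(lists:reverse(lists:keysort(2, Lens)), N).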
Problems with EPMD during execution.
• Lost connection: EPMD loses connections with nodes for some arbitrary reason.
• EPMD task restarted by Mesos: restarted for an unknown reason, which leads Lasp instances to restart in their own container.
5 GiB of state within 90 seconds.
• Delta dissemination: delta dissemination only provides around a 30% decrease in state transmission.
• Unbounded queues: message buffers would lead to VMs crashing because of large memory consumption.
Unclear what they quantify and how to properly provision tasks using them.
• Random task failures: impossible to debug when instances are "Killed"; mostly the OOM killer, as we learned.
• Log rolling: the UI doesn't handle log rolling when debugging; the CLI needs to be restarted in some cases for log rolling.
• Docker containerizer: seemed very immature at the time; difficult to debug or gain visibility into processes running in Docker.
Build a membership service with an abstract interface, initially on top of EPMD, and migrate later once tested.
• Adapt Lasp and the broadcast layer: integrate the pluggable membership service throughout the stack and liberate existing libraries from Distributed Erlang.
• Build a service discovery mechanism: mechanize node discovery outside of EPMD based on the new membership service.
Runtime configuration of the protocols used for cluster membership (a hypothetical interface is sketched below).
• Several protocol implementations:
  • Full membership via EPMD.
  • Full membership via TCP.
  • Client-server membership via TCP.
  • Peer-to-peer membership via TCP (with HyParView).
• Visualization: provides a force-directed, graph-based visualization engine for real-time cluster debugging.
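A hypothetical sketch of such a pluggable interface (the module and callback names are illustrative, not Partisan's actual API): the rest of the stack codes against the behaviour, and the concrete protocol is chosen by configuration.

-module(membership_service).

%% Any backend (full mesh, client-server, HyParView, ...) implements
%% the same callbacks, so Lasp and the broadcast layer never talk to a
%% specific protocol directly.
-callback join(Peer :: node()) -> ok | {error, term()}.
-callback leave(Peer :: node()) -> ok.
-callback members() -> {ok, [node()]}.

Switching protocols then becomes a configuration change rather than an application change.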
All nodes have full visibility into the entire graph.
• Failure detection: performed by peer-to-peer heartbeat messages with a timeout (sketched below).
• Limited scalability: the heartbeat interval increases as the node count increases, leading to false or delayed detection.
• Testing: used to create the initial test suite for Partisan.
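An illustrative all-to-all heartbeat detector (not Partisan's implementation): each node pings every peer it knows about and suspects any peer that misses the timeout, which is why the message cost grows quadratically with cluster size.

-module(heartbeat_sketch).
-export([detect/2]).

%% Ping every peer; return the peers that did not answer in time.
%% (The responder side, which replies {pong, node()}, is omitted.)
detect(Peers, TimeoutMs) ->
    [erlang:send({heartbeat_sketch, Peer}, {ping, node()}) || Peer <- Peers],
    wait_for_acks(Peers, TimeoutMs).

wait_for_acks([], _TimeoutMs) ->
    [];
wait_for_acks(Pending, TimeoutMs) ->
    receive
        {pong, Peer} ->
            wait_for_acks(lists:delete(Peer, Pending), TimeoutMs)
    after TimeoutMs ->
        Pending  %% suspected failed
    end.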
The server has all nodes in the system as peers; a client has only the server as a peer.
• Failure detection: nodes heartbeat, with a timeout, all peers they are aware of.
• Limited scalability: the server is a single point of failure, with limited scalability on visibility.
• Testing: used for baseline evaluations as the "reference" architecture.
Two views: active (fixed size) and passive (log n); the passive view is used to replace failed members of the active view (see the sketch below).
• Failure detection: performed by monitoring active TCP connections to peers, with keep-alive enabled.
• Very scalable (10k+ nodes during academic evaluation): however, the protocol is probabilistic and can potentially lead to isolated nodes during churn.
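A simplified sketch of the view maintenance described above (not the full HyParView protocol): when an active connection drops, a random passive member is promoted so the active view stays full.

-module(hyparview_sketch).
-export([handle_peer_down/3]).

%% Active and Passive are ordsets of node names.
handle_peer_down(Peer, Active0, Passive0) ->
    Active = ordsets:del_element(Peer, Active0),
    case ordsets:to_list(Passive0) of
        [] ->
            {Active, Passive0};
        Candidates ->
            Promoted = lists:nth(rand:uniform(length(Candidates)), Candidates),
            {ordsets:add_element(Promoted, Active),
             ordsets:del_element(Promoted, Passive0)}
    end.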
Builds on the overlay network to reduce message redundancy.
• Hybrid approach: ideally, tree-based dissemination, while flooding other nodes with the metadata used to repair the tree during network partitions.
• Configurable at runtime: possible to enable or disable at runtime without application changes; it is an optimization only.
Configurable via the runtime environment for any of the previously discussed models.
• Dissemination layer: also configurable: client/server, tree-based or not, causally ordered or not, for efficiency.
• No application modifications: these choices only affect runtime performance; they do not require rewriting the application, since it is written against the weakest (safest) mode.
Clusters all nodes and ensures a connected overlay network; reads information from Marathon.
• Node local: operates at each node and is responsible for taking actions to keep the graph connected; required for probabilistic protocols.
• Membership mode specific: knows, based on the membership mode, how to properly cluster nodes, and enforces proper join behaviour.
Each node uploads its membership view for analysis.
• Elected node (or group) analyses: periodically analyses the information in S3 for the following (sketched below):
  • Isolated node detection: identifies isolated nodes and takes corrective measures to repair the overlay.
  • Verifies symmetric relationships: ensures that if a node knows about another node, the relationship is symmetric; prevents "I know you, but you don't know me."
  • Periodic alerting: alerts on disconnected graphs so external measures can be taken, if necessary.
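A sketch of the two checks (illustrative only; Sprinter's actual analysis code is not shown here), run over a map from each node to the peer set it reports:

-module(overlay_check_sketch).
-export([isolated_nodes/1, asymmetric_pairs/1]).

%% Views :: #{node() => ordsets:ordset(node())}.

%% A node that reports no peers is isolated from the overlay.
isolated_nodes(Views) ->
    [Node || {Node, Peers} <- maps:to_list(Views), Peers =:= []].

%% Pairs {A, B} where A lists B as a peer but B does not list A.
asymmetric_pairs(Views) ->
    [{A, B} || {A, Peers} <- maps:to_list(Views),
               B <- ordsets:to_list(Peers),
               not ordsets:is_element(A, maps:get(B, Views, ordsets:new()))].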
Deploy a cluster of nodes and configure simulations at runtime.
• Each simulation:
  • Different application scenario: uniquely executes a different application scenario based on runtime configuration.
  • Result aggregation: aggregates results at the end of execution and archives them.
  • Plot generation: automatically generates plots for the execution and aggregates the results of multiple executions.
• Minimal coordination: work must be performed with minimal coordination, as a single orchestrator is a scalability bottleneck for large applications.
Build the desired topology based on information derived from Marathon (ports, IPs, etc.).
• Ensure connectivity: nodes may die and be restarted during execution, so we should ensure that the graph stays connected and that new nodes are added.
Derive the desired behavior of each task.
• Environment: the environment should be used to derive the behavior of each task during execution.
• Event generation: nodes should generate their own events; there should be no requirement for a central event executor, since each node contains its own synthetic workload (sketched below).
• Instrumentation: nodes instrument and log their own events for later log aggregation of results.
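A sketch of such a node-local generator (illustrative, not the actual harness code): each node drives its own workload on a timer by calling whatever update function the scenario provides, so no central task has to dispatch events.

-module(workload_sketch).
-export([generate_events/3]).

%% Apply UpdateFun (e.g. an increment of this node's counter) Count
%% times, pausing IntervalMs between events, entirely node-locally.
generate_events(_UpdateFun, 0, _IntervalMs) ->
    ok;
generate_events(UpdateFun, Count, IntervalMs) ->
    _ = UpdateFun(),
    timer:sleep(IntervalMs),
    generate_events(UpdateFun, Count - 1, IntervalMs).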
A CRDT containing counters that each node manipulates.
• Simulates a workflow: nodes use this operation to simulate a workflow for the experiment.
• Event generation: event generation toggles a boolean for the node to show completion.
• Log aggregation: completion triggers log aggregation.
• Shutdown: upon log aggregation completion, nodes shut down.
• External monitoring: when events complete execution, nodes automatically begin the next experiment.
Automates log aggregation and the generation of plots from log data.
• Reproducible for "verified artifacts": deterministic regeneration of log data into gnuplot artifacts, with permanent archival.
• Repeated push/pull operations: since it is Git-based, repeated push/pull/rebase is required to push logs; expensive, but it works for now.
Problems appear once you exceed a few nodes: message queues, memory, delays.
• Partial views: required; rely on transitive dissemination of information and partial network knowledge.
• Results: reduced the Lasp memory footprint to 75 MB; larger in practice for debugging.
Repair mechanism: random promotion of isolated nodes; mainly issues of symmetry.
• FIFO across connections: FIFO ordering is only guaranteed per connection, but the protocol assumes it across all connections, leading to false disconnects.
• Unrealistic system model: you need per-message acknowledgements for safety.
• Pluggable protocols help debugging: being able to switch to full membership or client-server assists in separating protocol problems from application problems.
Build and evaluate nodes reproducibly at 500-node cluster sizes (possible at 1,000 tasks over 140 nodes).
• Limited financially and by Amazon: harder to run larger evaluations because we are limited financially (as a university) and by Amazon limits.
• Mean state reduction per client: around a 100x improvement over our initial PaPoC 2016 evaluation results.
Visibility into your cluster: all of these things lead to easier debugging.
• Control changes: no Lasp PR is accepted without divergence, state transmission, and overhead graphs.
• Automation: developers use graphs when they are easy to make; lower the difficulty of generating them and understand how changes alter system behaviour.
• Make work easily testable: when you test locally and deploy globally, you need to make things easy to test, deploy, and evaluate (for good science, I say!).