The Road to Summingbird: Stream Processing at (Every) Scale

Sam Ritchie
January 11, 2014

Twitter's Summingbird library allows developers and data scientists to build massive streaming MapReduce pipelines without worrying about the usual mess of systems issues that come with realtime systems at scale.

But what if your project is not quite at "scale" yet? Should you ignore scale until it becomes a problem, or swallow the pill ahead of time? Is using Summingbird overkill for small projects? I argue that it's not. This talk will discuss the ideas and components of Summingbird that you could, and SHOULD, use in your startup's code from day one. You'll come away with a new appreciation for monoids and semigroups and a thirst for abstract algebra.


Transcript

  1. THE ROAD TO SUMMINGBIRD: Stream Processing at (Every) Scale. Sam Ritchie :: @sritchie :: Data Day Texas 2014
  2. AGENDA • Logging and Monitoring in the Small • Scaling toward Summingbird - Tooling Overview
  3. AGENDA • Logging and Monitoring in the Small • Scaling toward Summingbird - Tooling Overview • What breaks at full scale?
  4. AGENDA • Logging and Monitoring in the Small • Scaling toward Summingbird - Tooling Overview • What breaks at full scale? • Summingbird’s Constraints, how they can help
  5. AGENDA • Logging and Monitoring in the Small • Scaling toward Summingbird - Tooling Overview • What breaks at full scale? • Summingbird’s Constraints, how they can help • Lessons Learned
  6. WHAT TO MONITOR? • Application “Events” • on certain events or patterns • Extract metrics from the event stream
  7. WHAT TO MONITOR? • Application “Events” • on certain events or patterns • Extract metrics from the event stream • Dashboards?
  8. LOG STATEMENTS
     (defn create-user! [username]
       (log/info "User Created: " username)
       (db/create {:type :user
                   :name username
                   :timestamp (System/currentTimeMillis)}))
  9. WHAT DO YOU GET? • Ability to REACT to system events • Long-term storage via S3 • Searchable Logs
  10. WHAT’S MISSING? • How many users per day? • How many times did this exception show up vs that?
  11. WHAT’S MISSING? • How many users per day? • How many times did this exception show up vs that? • Was this the first time I’ve seen that error?
  12. WHAT’S MISSING? • How many users per day? • How many times did this exception show up vs that? • Was this the first time I’ve seen that error? • Pattern Analysis requires Aggregations
  13. IMPOSE STRUCTURE
      ;; unstructured log string
      (log/info "User Created: " username)

      ;; structured event map
      (log/info {:event "user_creation"
                 :name "sritchie"
                 :timestamp (now)
                 :request-id request-id})
  14. EVENT PROCESSORS • FluentD (http://fluentd.org/) • Riemann (http://riemann.io/) • Splunk (http://www.splunk.com/) • Simmer (https://github.com/avibryant/simmer) • StatsD + CollectD (https://github.com/etsy/statsd/)
  15. EVENT PROCESSORS • FluentD (http://fluentd.org/) • Riemann (http://riemann.io/) • Splunk (http://www.splunk.com/) • Simmer (https://github.com/avibryant/simmer) • StatsD + CollectD (https://github.com/etsy/statsd/) • Esper (http://esper.codehaus.org/)
  16. LOG COLLECTION • Kafka (https://kafka.apache.org/) • LogStash (http://logstash.net/) • Flume (http://flume.apache.org/) • Kinesis (http://aws.amazon.com/kinesis/) • Scribe (https://github.com/facebook/scribe)
  17. What is Summingbird? - Declarative Streaming Map/Reduce DSL - Realtime platform that runs on Storm. - Batch platform that runs on Hadoop. - Batch / Realtime Hybrid platform
  18. val impressionCounts = impressionHose.flatMap(extractCounts(_))
      val engagementCounts = engagementHose.filter(_.isValid)
                                           .flatMap(extractCounts(_))
      val totalCounts = (impressionCounts ++ engagementCounts)
        .flatMap(fanoutByTime(_))
        .sumByKey(onlineStore)
      val stormTopology = Storm.remote("stormName").plan(totalCounts)
      val hadoopJob = Scalding("scaldingName").plan(totalCounts)
  19. MAP/REDUCE [Diagram: Event Stream 1 and Event Stream 2 feed FlatMappers (f1, f2); their keyed outputs are combined with + by Reducers into Storage (Memcache / ElephantDB)]
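The dataflow in the diagram can be sketched in a few lines of plain Scala (an illustration of the shape only, not Summingbird's API): two event streams are flatMapped into key/value pairs, merged, and reduced per key with an associative +.

    // Illustrative dataflow; the extract function and value type are assumptions.
    def flatMapReduce[E, K, V](stream1: Seq[E],
                               stream2: Seq[E],
                               extract: E => Seq[(K, V)],
                               plus: (V, V) => V): Map[K, V] =
      (stream1 ++ stream2)
        .flatMap(extract)                      // FlatMappers emit (key, value) pairs
        .groupBy { case (k, _) => k }          // shuffle by key
        .map { case (k, kvs) =>                // Reducers combine values with +
          k -> kvs.map(_._2).reduce(plus)
        }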
  20. // Views per URL Tweeted
      (URL, Int)
      // Unique Users per URL Tweeted
      (URL, Set[UserID])
  21. // Views per URL Tweeted
      (URL, Int)
      // Unique Users per URL Tweeted
      (URL, Set[UserID])
      // Views AND Unique Users per URL
      (URL, (Int, Set[UserID]))
  22. // Views per URL Tweeted
      (URL, Int)
      // Unique Users per URL Tweeted
      (URL, Set[UserID])
      // Views AND Unique Users per URL
      (URL, (Int, Set[UserID]))
      // Views, Unique Users + Top-K Users
      (URL, (Int, Set[UserID], TopK[(User, Count)]))
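These composite value types work because each component combines associatively and a tuple of such values combines element-wise. A hand-rolled sketch of the idea (in practice Algebird supplies these semigroup instances):

    trait Semigroup[T] { def plus(a: T, b: T): T }

    type UserID = Long

    val viewCount: Semigroup[Int] = (a, b) => a + b              // views add
    val uniqueUsers: Semigroup[Set[UserID]] = (a, b) => a ++ b   // user sets union

    // A pair of semigroups is itself a semigroup, combined component-wise.
    def pair[A, B](sa: Semigroup[A], sb: Semigroup[B]): Semigroup[(A, B)] =
      (x, y) => (sa.plus(x._1, y._1), sb.plus(x._2, y._2))

    val viewsAndUsers: Semigroup[(Int, Set[UserID])] = pair(viewCount, uniqueUsers)
    // viewsAndUsers.plus((3, Set(1L, 2L)), (2, Set(2L, 5L))) == (5, Set(1L, 2L, 5L))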
  23. ;; 7 steps
      a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7
  24. ;; 5 steps
      (+ (+ a0 a1) (+ a2 a3) (+ a4 a5) (+ a6 a7))
  25. ;; 3 steps
      (+ (+ (+ a0 a1) (+ a2 a3)) (+ (+ a4 a5) (+ a6 a7)))
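The regrouping is legal because + is associative, which is exactly what lets a reduce be split across mappers and reducers and still give the same answer. A small Scala illustration of the point (not the deck's code):

    val xs: Seq[Long] = Seq(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L)   // a0 .. a7

    // 7 dependent steps: each addition waits on the previous result.
    val sequential = xs.reduceLeft(_ + _)

    // Pairwise tree: 4 independent sums, then 2, then 1, i.e. depth log2(8) = 3.
    def treeSum(ys: Seq[Long]): Long =
      if (ys.size == 1) ys.head
      else treeSum(ys.grouped(2).map(_.sum).toSeq)

    assert(sequential == treeSum(xs))   // associativity guarantees equal results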
  26. BATCH / REALTIME [Diagram: logs feed both Hadoop (fault tolerant) and realtime (noisy) layers across BatchIDs 0-3] Realtime sums from 0 within each batch.
  27. BATCH / REALTIME [Same diagram] Hadoop keeps a total sum (reliably).
  28. BATCH / REALTIME [Same diagram] The sum of RT Batch(i) + Hadoop Batch(i-1) has bounded noise and bounded read/write size.
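One way to picture the hybrid read path: a client adds the exact Hadoop total through batch i-1 to the noisy realtime sum for batch i, so any realtime error is confined to the most recent batch. A hypothetical sketch (the store shapes and names are assumptions, not Summingbird's client API):

    // Hypothetical stores keyed by (key, batchId).
    def hybridRead(key: String,
                   currentBatch: Long,
                   hadoop: Map[(String, Long), Long],     // exact totals through a batch
                   realtime: Map[(String, Long), Long]    // noisy per-batch sums
                  ): Long = {
      val exactThroughPrev = hadoop.getOrElse((key, currentBatch - 1), 0L)
      val noisyCurrent     = realtime.getOrElse((key, currentBatch), 0L)
      exactThroughPrev + noisyCurrent
    }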
  29. Approximate Maps - We would probably be okay if for each Key we could get an approximate Value. - We might not need to enumerate all resulting keys; perhaps only keys with large values would do.
  30. [Diagram: a d-by-w array of counters, the table behind the Count-Min Sketch on the next slide]
  31. Count-Min Sketch is an Approximate Map - Each Key is hashed to d values in [0, w-1] - sum into those buckets - Result is the min over all d buckets - Result is an upper bound on the true value - With probability > (1 - delta), error is at most eps * Total Count - w = 1 / eps, d = log(1 / delta) - total memory cost O(w * d)
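A minimal, self-contained Count-Min Sketch in Scala that follows the construction on the slide (an illustration only; Algebird ships a production CMS with a monoid instance):

    import scala.util.hashing.MurmurHash3

    // w = 1/eps columns, d = log(1/delta) rows, memory O(w * d), as on the slide.
    class CountMin(eps: Double, delta: Double) {
      val w: Int = math.ceil(1.0 / eps).toInt
      val d: Int = math.ceil(math.log(1.0 / delta)).toInt
      private val table = Array.ofDim[Long](d, w)

      // Each key hashes to one column per row; the row index doubles as the hash seed.
      private def columns(key: String): Seq[Int] =
        (0 until d).map(row => ((MurmurHash3.stringHash(key, row) % w) + w) % w)

      def add(key: String, count: Long = 1L): Unit =
        columns(key).zipWithIndex.foreach { case (col, row) => table(row)(col) += count }

      // Min over the d buckets: an upper bound on the true count; with probability
      // at least 1 - delta the overestimate is at most eps times the total count added.
      def frequency(key: String): Long =
        columns(key).zipWithIndex.map { case (col, row) => table(row)(col) }.min
    }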
  32. [Diagram: Tweets flow from HDFS/Queue through (Flat)Mappers (f) to Reducers (+) and back to HDFS/Queue; reduce: (x, y) => MapMonoid, groupBy TweetID, producing (TweetID, Map[URL, Long])]
  33. Brief Explanation This job creates two types of keys: 1: (TweetId, TimeBucket) => CMS[URL, Impressions] 2: TimeBucket => CMS[TweetId, Impressions]
  34. Future Plans - Akka, Spark, Tez Platforms - More Monoids - Pluggable graph optimizations - Auto-tuning Realtime Topologies
  35. TAKEAWAYS • Scale - Fake it ‘til you Make It • Structured Logging • Include timestamps EVERYWHERE
  36. TAKEAWAYS • Scale - Fake it ‘til you Make It • Structured Logging • Include timestamps EVERYWHERE • Record your Schemas