

Reactive in Reverse, a guest talk by Daniel Spiewak

The New York Times Developers invited Daniel Spiewak to give a guest talk on reactive programming at TimesOpen, our public developer event series. Daniel is a software developer based in Boulder, CO. Over the years, he has worked with Java, Scala, Ruby, C/C++, ML, Clojure and several experimental languages. He currently spends most of his free time researching parser theory and methodologies, particularly areas where the field intersects with functional language design, domain-specific languages and type theory.

The New York Times Developers

September 10, 2014



Transcript

  1. Pull vs Push • "Reactive" streams • Java 8 streams • Akka streams • "Coreactive" streams • Kmett's Machines (Haskell) • scalaz-stream
  2. Pull vs Push • "Reactive" streams • Java 8 streams • Akka streams • "Coreactive" streams • Kmett's Machines (Haskell) • scalaz-stream
  3. Pull vs Push • Push streams • Data assertively pushed into your flow • Multi-output trivial; multi-input hard
  4. Pull vs Push • Push streams • Data assertively pushed into your flow • Multi-output trivial; multi-input hard • Pull streams • "Turn the crank" from the end and request data • Multi-output hard; multi-input trivial
  5. Pull vs Push • Push streams • Backpressure is something you need to design • More intuitive control flow (imperatively)
  6. Pull vs Push • Push streams • Backpressure is something you need to design • More intuitive control flow (imperatively) • Pull streams • Backpressure is trivial (it "just works") • More declarative control, which can be weird
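
    The contrast can be sketched in two illustrative traits (these are not taken from any of the libraries above; they only name the two shapes of control flow):

    // Push: the producer drives. You register a callback and data is
    // handed to you whenever the upstream decides to emit it.
    trait PushSource[A] {
      def subscribe(onNext: A => Unit): Unit
    }

    // Pull: the consumer drives. You "turn the crank" and ask for the
    // next element only when you are ready; None signals the end.
    trait PullSource[A] {
      def next(): Option[A]
    }
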
  7. Concepts • Task[A] • Like Future, but more controlled • Process[Task, A] • A strict sequence of actions
  8. Concepts: Task • Fully lazy • Creating a Future executes immediately • No more memory leaks!
  9. Concepts: Task • Fully lazy • Creating a Future executes immediately • No more memory leaks! • Easy to move tasks between thread pools
  10. Concepts: Task • Fully lazy • Creating a Future executes immediately • No more memory leaks! • Easy to move tasks between thread pools • Better thread utilization
  11. Concepts: Task • Fully lazy • Creating a Future executes immediately • No more memory leaks! • Easy to move tasks between thread pools • Better thread utilization • Explicit parallelism
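
    A minimal sketch of the laziness point, assuming scalaz.concurrent.Task (as used throughout this deck) and a plain Scala Future:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global
    import scalaz.concurrent.Task

    // A Future starts executing the moment it is constructed...
    val eager = Future { println("future: already running") }

    // ...whereas a Task is only a description of work; nothing runs yet
    val described = Task delay { println("task: running now") }

    // The Task's effect fires only when it is explicitly run
    described.run
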
  12. def fib(n: Int): Task[Int] = n match {
        case 0 | 1 => Task now 1
        case n => {
          for {
            x <- fib(n - 1)
            y <- fib(n - 2)
          } yield x + y
        }
      }
      fib(42).run
  13. def fib(n: Int): Task[Int] = n match {
        case 0 | 1 => Task now 1
        case n => {
          val ND = Nondeterminism[Task]
          for {
            pair <- ND.both(fib(n - 1), fib(n - 2))
            (x, y) = pair
          } yield x + y
        }
      }
      fib(42).run
  14. def futureToTask[A](f: Future[A]): Task[A] = {
        Task async { cb =>
          f onComplete {
            case Success(v) => cb(\/.right(v))
            case Failure(e) => cb(\/.left(e))
          }
        }
      }
  15. def futureToTask[A](f: Future[A]): Task[A] = {
        Task async { cb =>
          f onComplete {
            case Success(v) => cb(\/.right(v))
            case Failure(e) => cb(\/.left(e))
          }
        }
      }
  16. Concepts: Process • An ordered sequence of actions • Ask for an action…then the next…then the next • If you can't keep up, you ask less frequently • Easy to merge (just ask for data from either "side") • Explicit parallelism
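
    A small sketch of the "turn the crank" idea, using Process and Task as elsewhere in this deck: the counting effect below runs only as often as the downstream take(3) demands, even though the source is conceptually infinite.

    import java.util.concurrent.atomic.AtomicInteger
    import scalaz.concurrent.Task
    import scalaz.stream.Process

    val counter = new AtomicInteger
    val tick: Task[Int] = Task delay { counter.getAndIncrement() }

    // An "infinite" source, but nothing runs until downstream asks
    val src: Process[Task, Int] = Process.eval(tick).repeat

    // Only three ticks ever execute, because only three are demanded
    val firstThree = src.take(3).runLog.run
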
  17. def fetchUrl(num: Int): Task[String] = {
        val fetch: Task[Task[String]] = Task delay {
          val svc = url(s"http://api.stuff.com/record/$num")
          Task fork futureToTask(Http(svc OK as.String))
        }
        fetch.join
      }
  18. val nums: Process[Task, Int] = Process.range(0, 10)
      val adjusted = nums map { _ * 2 } filter { _ < 10 }
      val pages = adjusted flatMap { num => Process.eval(fetchUrl(num)) }
      val found = pages find { _ contains "Waldo!" }
      val stuff: Task[Unit] = found to io.stdOutLines run
      stuff.run
  19. val nums1: Process[Task, Int] = Process.range(0, 10)
      val nums2: Process[Task, Int] = Process.range(11, 20)
      val nums: Process[Task, Int] = nums1 interleave nums2
      ...
  20. val i = new AtomicInteger
      val read = Task delay { i.getAndIncrement() }
      val src = Process.eval(read).repeat
      val left = src map { i => s"left: $i" }
      val right = src map { i => s"right: $i" }
      left interleave right to io.stdOutLines
  21. left: 0
      right: 1
      left: 2
      right: 3
      left: 4
      right: 5
      left: 6
      right: 7
      left: 8
      right: 9
      left: 10
      right: 11
      left: 12
      right: 13
      ...
  22. val queue = new ArrayBlockingQueue[Message](10)   // looks like I'm a wimp
      val read: Task[Message] = Task delay { queue.take() }
      val src: Process[Task, Message] = Process.eval(read).repeat
      ...
      // bounded queues are for wimps...
  23. Sinks • Data has to go somewhere • Writing out to a channel • Writing to disk
  24. Sinks • Data has to go somewhere • Writing out to a channel • Writing to disk • …or all of the above
  25. Sinks • Data has to go somewhere • Writing out to a channel • Writing to disk • …or all of the above • What is a sink anyway?
  26. Sinks • Data has to go somewhere • Writing out to a channel • Writing to disk • …or all of the above • What is a sink anyway? • A stream of functions!
  27. def write(str: String): Task[Unit] = Task delay { println(str) }
      val sink: Sink[Task, String] = Process.constant(write _)
      val src = Process.range(0, 10) map { _.toString }
      val results = src zip sink flatMap {
        case (str, f) => Process eval f(str)
      }
      val universe: Task[Unit] = results.run
  28. val stdOut: Sink[Task, String] = ...
      val channel: Sink[Task, String] = ...
      val src = Process.range(0, 10) map { _.toString }
      val results = src zip stdOut zip channel flatMap {
        case ((str, f1), f2) => {
          for {
            _ <- Process eval f1(str)
            _ <- Process eval f2(str)
          } yield ()
        }
      }
      val universe: Task[Unit] = results.run
  29. val stdOut: Sink[Task, String] = ...
      val channel: Sink[Task, String] = ...
      val src = Process.range(0, 10) map { _.toString }
      val results = src observe stdOut to channel
      val universe: Task[Unit] = results.run
  30. Concurrency • Always explicit! • Two forms of parallelism • Racing two streams into one • Turning a stream "sideways"
  31. Concurrency • Always explicit! • Two forms of parallelism • Racing two streams into one • Turning a stream "sideways" • Almost everything implemented on top of wye
  32. wye

  33. val left: Process[Task, Message] = ...
      val right: Process[Task, Message] = ...
      val merged: Process[Task, Message] = left.wye(right)(wye.merge)
  34. val left: Process[Task, Message] = ...
      val right: Process[Task, Message] = ...
      val merged: Process[Task, Message] = left merge right   // should be "race"
  35. val left: Process[Task, Message] = ...
      val right: Process[Task, Line] = ...
      // oh NOES! teh symbols cometh!
      val merged: Process[Task, Message \/ Line] = left either right
  36. val nums: Process[Task, Int] = Process.range(0, 10)
      val adjusted = nums map { _ * 2 } filter { _ < 10 }
      val pages = adjusted flatMap { num => Process.eval(fetchUrl(num)) }
  37. val nums: Process[Task, Int] = Process.range(0, 10)
      val adjusted = nums map { _ * 2 } filter { _ < 10 }
      val pages: Process[Task, Task[String]] = adjusted map { num => fetchUrl(num) }
      val parallel: Process[Task, String] = pages.gather(4)
  38. gather(n) • Grabs chunks of n and parallelizes • Last chunk of stream may be truncated • Great for finite streams!
  39. gather(n) • Grabs chunks of n and parallelizes • Last chunk of stream may be truncated • Great for finite streams! • Causes DEADLOCK on infinite streams
  40. gather(n) • Grabs chunks of n and parallelizes • Last chunk of stream may be truncated • Great for finite streams! • Causes DEADLOCK on infinite streams • Don't use if you source from a queue!
  41. val nums: Process[Task, Int] = Process.range(0, 10)
      val adjusted = nums map { _ * 2 } filter { _ < 10 }
      val pages: Process[Task, Process[Task, String]] = adjusted map { num => Process.eval(fetchUrl(num)) }
      val parallel: Process[Task, String] = merge.mergeN(pages)
  42. merge.mergeN • A little weirder to use… • Process of Process • Uses a variable bounded queue
  43. merge.mergeN • A little weirder to use… • Process of Process • Uses a variable bounded queue • Races all input streams
  44. merge.mergeN • A little weirder to use… • Process of Process • Uses a variable bounded queue • Races all input streams • Up to n at a time
  45. merge.mergeN • A little weirder to use… • Process of Process • Uses a variable bounded queue • Races all input streams • Up to n at a time • Almost always what you really want
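
    To pin down the "up to n at a time" bullet: mergeN can also take the bound explicitly. A sketch, assuming the maxOpen overload of scalaz-stream's merge.mergeN and reusing the nested pages stream from slide 41:

    import scalaz.concurrent.Task
    import scalaz.stream.{Process, merge}

    val pages: Process[Task, Process[Task, String]] = ???

    // Race all of the inner streams, but keep at most four of them
    // open at any one time
    val parallel: Process[Task, String] = merge.mergeN(4)(pages)
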
  46. Chat Server • Uses scalaz-netty project • Currently closed-source, but OSS soon™! • Would also work with scalaz-nio
  47. Chat Server • Uses scalaz-netty project • Currently closed-source, but OSS soon™! • Would also work with scalaz-nio • Uses scodec
  48. Chat Server • Uses scalaz-netty project • Currently closed-source, but OSS soon™! • Would also work with scalaz-nio • Uses scodec • Use this. Use it. It's amazing.
  49. Chat Server • Uses scalaz-netty project • Currently closed-source, but OSS soon™! • Would also work with scalaz-nio • Uses scodec • Use this. Use it. It's amazing. • Demonstrates the power of the Process abstraction
  50. Server • Accept connections asynchronously • …and in parallel! • Pipe inbound data to a relay queue • Pipe relay queue into the outbound channel • …including all history!
  51. Server • Accept connections asynchronously • …and in parallel! • Pipe inbound data to a relay queue • Pipe relay queue into the outbound channel • …including all history! • Continue until client closes connection
  52. val address: InetSocketAddress = ???
      val relay = async.topic[BitVector]

      val handlers = Netty server address map { client =>
        for {
          Exchange(src, sink) <- client
          in = src to relay.publish
          out = relay.subscribe to sink
          _ <- in merge out
        } yield ()
      }

      val server: Task[Unit] = merge.mergeN(handlers).run
  53. Client • Establish connection • Pipe standard input to the server (as UTF-8) • Pipe server response to standard output
  54. Client • Establish connection • Pipe standard input to the server (as UTF-8) • Pipe server response to standard output • Continue until the user's fail-sauce Ctrl-C kills us
  55. implicit val codec: Codec[String] = utf8

      def transcode(ex: Exchange[BitVector, BitVector]) = {
        val decoder = decode.many[String]
        val encoder = encode.many[String]

        val Exchange(src, sink) = ex
        val src2 = src flatMap decoder.decode
        val sink2 = sink pipeIn encoder.encoder
        Exchange(src2, sink2)
      }
  56. val clientP = for {
        rawData <- Netty connect address
        Exchange(src, sink) = transcode(rawData)
        in = src to io.stdOutLines
        out = io.stdInLines to sink
        _ <- in merge out
      } yield ()

      val client: Task[Unit] = clientP.run
  57. Notes • Resources are managed and cannot leak • Logic is pure and encapsulated from networking
  58. Notes • Resources are managed and cannot leak • Logic is pure and encapsulated from networking • Backpressure "just works" (sort of)
  59. Notes • Resources are managed and cannot leak • Logic is pure and encapsulated from networking • Backpressure "just works" (sort of) • Our Topic is unbounded, because I'm lazy
  60. Notes • Resources are managed and cannot leak • Logic is pure and encapsulated from networking • Backpressure "just works" (sort of) • Our Topic is unbounded, because I'm lazy • Handshaking would be almost trivial
  61. Notes • Resources are managed and cannot leak • Logic is pure and encapsulated from networking • Backpressure "just works" (sort of) • Our Topic is unbounded, because I'm lazy • Handshaking would be almost trivial • Client and server logic looks almost the same!
  62. • A different take on "reactive" • Purity helps us understand complex logic! • No more puzzling about state or resource leaks
  63. • A different take on "reactive" • Purity helps us understand complex logic! • No more puzzling about state or resource leaks • Simple and easy combinators scale well
  64. • A different take on "reactive" • Purity helps us understand complex logic! • No more puzzling about state or resource leaks • Simple and easy combinators scale well • You know almost everything you need