Big data... big mess? Without a flexible, proven platform design up front, you risk ending up with a tangle of point-to-point feeds. Apache Kafka addresses this by enabling stream or batch consumption of the data by multiple independent consumers. Implemented as part of Oracle's big data architecture, it acts as a flexible and scalable data bus for the enterprise. This session introduces the concepts of Kafka and the distributed streaming platform, and explains how it fits within the big data architecture. See it used with Oracle GoldenGate to stream data into the data reservoir, as well as to populate discovery lab environments on an ad hoc basis, feed microservices, and power real-time search.
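To make the "one data bus, many consumers" idea concrete, here is a minimal in-memory sketch in plain Python. This is not Kafka's actual API and all names here are illustrative; it only shows the core model Kafka generalizes across partitions and brokers: an append-only log that several consumers read independently, each tracking its own offset, so a real-time reader and a batch reader can share the same feed without interfering with each other.

```python
# Toy sketch of a log-based data bus (the model behind Kafka).
# NOT Kafka's API -- class and field names are hypothetical.

class Log:
    """Append-only log, standing in for a single topic partition."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)


class Consumer:
    """Each consumer keeps its own offset into the shared log, so
    streaming and batch readers never interfere with each other."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        # Return everything appended since this consumer last polled.
        new = self.log.records[self.offset:]
        self.offset = len(self.log.records)
        return new


bus = Log()
# e.g. change records arriving from a source database feed
bus.append({"table": "ORDERS", "op": "INSERT", "id": 1})
bus.append({"table": "ORDERS", "op": "UPDATE", "id": 1})

streaming = Consumer(bus)  # e.g. a real-time search indexer
batch = Consumer(bus)      # e.g. a periodic load into the data reservoir

print(streaming.poll())  # both consumers see the full feed, independently
print(batch.poll())
```

The point of the design is that producers never need to know who consumes the data: adding a new downstream use (a discovery lab, a microservice) means adding a consumer with its own offset, not building another point-to-point feed.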