Apache Kafka
JVM Meetup #5 - Apache Kafka at Blibli.com
Eko Kurniawan Khannedy
August 30, 2017
Transcript
APACHE KAFKA EKO KURNIAWAN KHANNEDY
APACHE KAFKA EKO KURNIAWAN KHANNEDY ▸ Principal Software Development Engineer at Blibli.com ▸ Part of the R&D Team at Blibli.com ▸ [email protected]
APACHE KAFKA AGENDA ▸ Kafka Intro ▸ Kafka Internals ▸
Installing Kafka ▸ Kafka Producer ▸ Kafka Consumer ▸ Kafka in blibli.com ▸ Demo ▸ Conclusion
KAFKA INTRO APACHE KAFKA
APACHE KAFKA BEFORE PUBLISH / SUBSCRIBE MESSAGING (diagram: MEMBER, ORDER, RISK, PAYMENT, … ERP, FINANCE, …)
APACHE KAFKA PUBLISH / SUBSCRIBE MESSAGING (diagram: MEMBER, ORDER, RISK, PAYMENT, … connected to ERP, FINANCE, … through a MESSAGING SYSTEM / MESSAGE BROKER)
APACHE KAFKA WHAT IS KAFKA ▸ Apache Kafka is a publish/subscribe messaging system, or, more recently, a "distributed streaming platform". ▸ An open-source project under the Apache Software Foundation.
APACHE KAFKA KAFKA HISTORY ▸ Kafka was born to solve the data pipeline problem at LinkedIn. ▸ The development team at LinkedIn was led by Jay Kreps, now CEO of Confluent. ▸ Kafka was released as an open-source project on GitHub in late 2010 and joined the Apache Software Foundation in 2011.
KAFKA INTERNALS APACHE KAFKA
APACHE KAFKA BROKER (diagram: a single KAFKA BROKER holding TOPIC A PARTITION 0 and TOPIC A PARTITION 1)
APACHE KAFKA CLUSTER (diagram: TOPIC A PARTITION 0 and TOPIC A PARTITION 1 replicated on KAFKA BROKER 1 and KAFKA BROKER 2, each partition with a LEADER replica)
APACHE KAFKA TOPICS ▸ Messages in Kafka are categorized into topics. ▸ The closest analogy for a topic is a database table, or a folder in a filesystem.
APACHE KAFKA PARTITIONS
APACHE KAFKA REPLICATION FACTOR (diagram: TOPIC A PARTITION 0 and PARTITION 1 on KAFKA BROKER 1, PARTITION 0 on KAFKA BROKER 2, PARTITION 1 on KAFKA BROKER 3, PARTITION 0 and PARTITION 1 on KAFKA BROKER 4)
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA CONSUMER GROUP (2)
APACHE KAFKA RETENTION POLICY ▸ A key feature of Apache Kafka is retention: the durable storage of messages for some period of time. ▸ The retention policy can be set per topic, by time or by size.
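Retention can be set when a topic is created, for example by passing topic-level configs to the same kafka-topics.sh tool used later in this deck. A minimal sketch, assuming a local ZooKeeper as in the other examples; the topic name and limits are illustrative, not from the slides:

# Keep messages for up to 7 days (retention.ms) or up to ~1 GiB per partition
# (retention.bytes), whichever limit is reached first.
kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic logs_topic \
  --config retention.ms=604800000 \
  --config retention.bytes=1073741824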
APACHE KAFKA MIRROR MAKER
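MirrorMaker is the tool shipped with Kafka for copying topics from one cluster into another, for example across datacenters. A hedged sketch of a typical invocation; the property file names and the topic whitelist are placeholders:

kafka-mirror-maker.sh --consumer.config source-cluster.properties \
  --producer.config target-cluster.properties \
  --whitelist "topic_name"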
INSTALLING KAFKA APACHE KAFKA
APACHE KAFKA JAVA ▸ Kafka uses Java 8.
APACHE KAFKA ZOOKEEPER (diagram: PRODUCER and CONSUMER talk to the KAFKA BROKER, which keeps its metadata in ZOOKEEPER)
APACHE KAFKA KAFKA BROKER
# Minimum Broker Configuration
# broker.id must be unique in the cluster
broker.id=0
zookeeper.connect=localhost:2181
log.dirs=data/kafka-logs
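With that minimum configuration in place, a single-node setup is usually started by launching ZooKeeper first and then the broker, assuming the stock scripts and property files that ship with the Kafka distribution:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties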
APACHE KAFKA CREATE / UPDATE TOPIC
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic_name --partitions 2 --replication-factor 2
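As a hypothetical follow-up, the same tool can list topics and describe their partition and replica assignment. Note that many Kafka versions reject --replication-factor combined with --alter; increasing replication is usually done separately with kafka-reassign-partitions.sh.

kafka-topics.sh --zookeeper localhost:2181 --list
kafka-topics.sh --zookeeper localhost:2181 --describe --topic topic_name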
KAFKA PRODUCER APACHE KAFKA
APACHE KAFKA PRODUCER RECORD (diagram: a PRODUCER RECORD carries TOPIC, PARTITION, KEY, and VALUE)
APACHE KAFKA SERIALIZER (diagram: the PRODUCER RECORD's KEY and VALUE pass through a SERIALIZER)
APACHE KAFKA PARTITIONER (diagram: after the SERIALIZER, the PARTITIONER selects the partition and the record is sent to the broker)
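The partitioner decides which partition a record goes to: if the record already names a partition it is used as-is, and a keyed record is otherwise hashed so that the same key always lands in the same partition. An illustrative Java sketch of that idea, not Kafka's actual implementation (which hashes the serialized key with murmur2):

import java.util.Arrays;

public class KeyPartitionSketch {

    // Hash the key bytes and take the result modulo the partition count,
    // so equal keys always map to the same partition.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        byte[] key = "member-42".getBytes();
        System.out.println(partitionFor(key, 2)); // same key -> same partition every run
    }
}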
APACHE KAFKA KAFKA PRODUCER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
APACHE KAFKA SEND MESSAGE
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
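send() returns a Future<RecordMetadata>, so the call above is fire-and-forget. A hedged sketch of the two other common patterns, reusing the producer and record from the previous slides (RecordMetadata lives in org.apache.kafka.clients.producer):

// Synchronous send: block until the broker acknowledges the record
try {
    RecordMetadata meta = producer.send(record).get();
    System.out.println("partition=" + meta.partition() + ", offset=" + meta.offset());
} catch (Exception e) {
    e.printStackTrace();
}

// Asynchronous send: the callback runs once the broker responds
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace();
    }
});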
KAFKA CONSUMER APACHE KAFKA
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA PARTITION REBALANCE (sequence of diagrams showing partitions being reassigned among the consumers in a group as consumers join or leave)
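An application can observe these rebalances by subscribing with a ConsumerRebalanceListener, which is invoked when partitions are revoked from or assigned to the consumer. A hedged sketch, assuming the KafkaConsumer built on the following slides (ConsumerRebalanceListener and TopicPartition come from the Kafka client library, Collection and Collections from java.util):

consumer.subscribe(Collections.singletonList("topicName"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Commit progress before these partitions are handed to another consumer
        consumer.commitSync();
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("assigned: " + partitions);
    }
});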
APACHE KAFKA CONSUMER RECORD & DESERIALIZER (diagram: bytes from the broker pass through a DESERIALIZER into a CONSUMER RECORD carrying TOPIC, PARTITION, KEY, and VALUE)
APACHE KAFKA KAFKA CONSUMER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "GroupName");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
APACHE KAFKA GET MESSAGES
consumer.subscribe(Collections.singletonList("topicName"));
long timeout = 1000L;
ConsumerRecords<String, String> records = consumer.poll(timeout);
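In practice poll() is called in a loop, and each ConsumerRecord in the result carries the topic, partition, offset, key, and value. A minimal sketch completing the snippet above; the processing is illustrative:

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(timeout);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.topic() + "-" + record.partition()
                + "@" + record.offset() + ": " + record.key() + "=" + record.value());
    }
    // Only needed when enable.auto.commit is set to false
    consumer.commitSync();
}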
KAFKA IN BLIBLI APACHE KAFKA
APACHE KAFKA API GATEWAY (diagram: EVENT API GATEWAY, MEMBER API GATEWAY, COMMON API GATEWAY, … connected through KAFKA to ANALYTICS, …)
APACHE KAFKA CURRENT PRODUCT (CODENAME X) (diagram: API GATEWAY in front of X MEMBER, X CART, X AUTH, X WISHLIST, X ORDER, X PRODUCT, X XXX, X YYYY)
APACHE KAFKA NEW PRODUCT (CODENAME VERONICA) (diagram: API GATEWAY with VERONICA MEMBER, VERONICA CORE, VERONICA MERCHANT, and VERONICA NOTIFICATION connected through KAFKA)
DEMO
CONCLUSION APACHE KAFKA
APACHE KAFKA WHY KAFKA? ▸ Multiple Consumers ▸ Flexible Scalability ▸ Flexible Durability ▸ High Performance ▸ Multi-Datacenter
WE ARE HIRING!
[email protected]
APACHE KAFKA
APACHE KAFKA REFERENCES ▸ http://kafka.apache.org/ ▸ https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines ▸ https://engineering.linkedin.com/kafka/running-kafka-scale