Apache Kafka
Eko Kurniawan Khannedy
August 30, 2017
JVM Meetup #5 - Apache Kafka at Blibli.com
Transcript
APACHE KAFKA EKO KURNIAWAN KHANNEDY
APACHE KAFKA EKO KURNIAWAN KHANNEDY ▸ Principal Software Development Engineer at Blibli.com ▸ Part of the R&D Team at Blibli.com ▸ [email protected]
APACHE KAFKA AGENDA ▸ Kafka Intro ▸ Kafka Internals ▸
Installing Kafka ▸ Kafka Producer ▸ Kafka Consumer ▸ Kafka in blibli.com ▸ Demo ▸ Conclusion
KAFKA INTRO APACHE KAFKA
APACHE KAFKA BEFORE PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK
PAYMENT … ERP FINANCE …
APACHE KAFKA PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK PAYMENT
… ERP FINANCE … MESSAGING SYSTEM / MESSAGE BROKER
APACHE KAFKA WHAT IS KAFKA ▸ Apache Kafka is a publish/subscribe messaging system, or more recently a “distributed streaming platform”. ▸ An open source project under the Apache Software Foundation.
APACHE KAFKA KAFKA HISTORY ▸ Kafka was born to solve the data pipeline problem at LinkedIn. ▸ The development team at LinkedIn was led by Jay Kreps, now CEO of Confluent. ▸ Kafka was released as an open source project on GitHub in late 2010, and joined the Apache Software Foundation in 2011.
KAFKA INTERNALS APACHE KAFKA
APACHE KAFKA BROKER TOPIC A PARTITION 0 TOPIC A PARTITION
1 KAFKA BROKER
APACHE KAFKA CLUSTER TOPIC A PARTITION 0 TOPIC A PARTITION
1 (LEADER) KAFKA BROKER 1 TOPIC A PARTITION 0 TOPIC A PARTITION 1 (LEADER) KAFKA BROKER 2
APACHE KAFKA TOPICS ▸ Messages in Kafka are categorized into topics. ▸ The closest analogy for a topic is a database table, or a folder in a filesystem.
APACHE KAFKA PARTITIONS
APACHE KAFKA REPLICATION FACTOR TOPIC A PARTITION 0 TOPIC A
PARTITION 1 KAFKA BROKER 1 TOPIC A PARTITION 0 KAFKA BROKER 2 TOPIC A PARTITION 1 KAFKA BROKER 3 TOPIC A PARTITION 0 TOPIC A PARTITION 1 KAFKA BROKER 4
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA CONSUMER GROUP (2)
APACHE KAFKA RETENTION POLICY ▸ A key feature of Apache Kafka is retention: the durable storage of messages for some period of time. ▸ We can set the retention policy per topic, by time or by size.
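The slides do not show how to change retention, so here is a minimal sketch of setting time-based retention on a single topic with the Kafka AdminClient (available since Kafka 0.11); the topic name topic_name and the 7-day value are illustrative, and checked exceptions are left to the caller.

// uses org.apache.kafka.clients.admin.{AdminClient, AlterConfigsResult, Config, ConfigEntry}
// and org.apache.kafka.common.config.ConfigResource
Properties adminProps = new Properties();
adminProps.put("bootstrap.servers", "broker1:9092");
try (AdminClient admin = AdminClient.create(adminProps)) {
    // keep messages on "topic_name" for 7 days (time-based retention)
    ConfigEntry retention = new ConfigEntry("retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000));
    ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "topic_name");
    admin.alterConfigs(Collections.singletonMap(topic, new Config(Collections.singletonList(retention))))
         .all().get(); // wait until the brokers have applied the change
}

Size-based retention can be set the same way through the retention.bytes topic configuration.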
APACHE KAFKA MIRROR MAKER
INSTALLING KAFKA APACHE KAFKA
APACHE KAFKA JAVA ▸ Kafka runs on Java 8.
APACHE KAFKA ZOOKEEPER KAFKA BROKER PRODUCER CONSUMER ZOOKEEPER Metadata
APACHE KAFKA KAFKA BROKER
# Minimum broker configuration
broker.id=0                         # must be unique within the cluster
zookeeper.connect=localhost:2181
log.dirs=data/kafka-logs
APACHE KAFKA CREATE / UPDATE TOPIC
# Create a topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
# Increase the number of partitions (the replication factor cannot be changed with --alter)
kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic_name --partitions 2
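Topics can also be managed from Java instead of the shell scripts. A minimal sketch using the AdminClient added in Kafka 0.11, mirroring the --create command above (broker address and topic name are the same illustrative values):

// uses org.apache.kafka.clients.admin.{AdminClient, NewTopic}
Properties adminProps = new Properties();
adminProps.put("bootstrap.servers", "localhost:9092");
try (AdminClient admin = AdminClient.create(adminProps)) {
    // topic_name with 1 partition and replication factor 1
    NewTopic newTopic = new NewTopic("topic_name", 1, (short) 1);
    admin.createTopics(Collections.singletonList(newTopic)).all().get(); // wait for creation
}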
KAFKA PRODUCER APACHE KAFKA
APACHE KAFKA PRODUCER RECORD PRODUCER RECORD TOPIC PARTITION KEY VALUE
APACHE KAFKA SERIALIZER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
APACHE KAFKA PARTITIONER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
PARTITIONER Send to Broker
APACHE KAFKA KAFKA PRODUCER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
APACHE KAFKA SEND MESSAGE
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
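producer.send() is asynchronous and returns a Future<RecordMetadata>. Building on the snippet above, the sketch below shows two common ways to handle the result; the logging is illustrative and not part of the original slides, and checked exceptions from get() are left to the caller.

// also uses org.apache.kafka.clients.producer.RecordMetadata
RecordMetadata metadata = producer.send(record).get(); // block until the broker acknowledges the write
System.out.printf("written to %s-%d at offset %d%n",
        metadata.topic(), metadata.partition(), metadata.offset());

producer.send(record, (meta, exception) -> {           // or handle the result asynchronously
    if (exception != null) {
        exception.printStackTrace();                    // e.g. the broker rejected the write
    }
});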
KAFKA CONSUMER APACHE KAFKA
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA PARTITION REBALANCE
APACHE KAFKA PARTITION REBALANCE
APACHE KAFKA PARTITION REBALANCE
APACHE KAFKA CONSUMER RECORD & DESERIALIZER CONSUMER RECORD TOPIC PARTITION
KEY VALUE DESERIALIZER From Broker
APACHE KAFKA KAFKA CONSUMER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "GroupName");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
APACHE KAFKA GET MESSAGES
consumer.subscribe(Collections.singletonList("topicName"));
long timeout = 1000L;
ConsumerRecords<String, String> records = consumer.poll(timeout);
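In practice the consumer polls in a loop, and each ConsumerRecords batch exposes the topic, partition, offset, key, and value of every message. A minimal sketch of such a loop; the commitSync() call assumes enable.auto.commit has been set to false, which the slide's configuration does not show.

// also uses org.apache.kafka.clients.consumer.ConsumerRecord
consumer.subscribe(Collections.singletonList("topicName"));
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000L);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                    record.partition(), record.offset(), record.key(), record.value());
        }
        consumer.commitSync(); // commit processed offsets manually
    }
} finally {
    consumer.close();
}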
KAFKA IN BLIBLI APACHE KAFKA
APACHE KAFKA API GATEWAY EVENT API GATEWAY MEMBER API GATEWAY
COMMON API GATEWAY … KAFKA ANALYTICS … …
APACHE KAFKA CURRENT PRODUCT (CODENAME X) X MEMBER X CART
X AUTH X WISHLIST API GATEWAY X YYYY X XXX X ORDER X PRODUCT
APACHE KAFKA NEW PRODUCT (CODENAME VERONICA) VERONICA MEMBER VERONICA CORE
VERONICA MERCHANT KAFKA VERONICA NOTIFICATION API GATEWAY
DEMO
CONCLUSION APACHE KAFKA
APACHE KAFKA WHY KAFKA? ▸ Multiple Consumers ▸ Flexible Scalability ▸ Flexible Durability ▸ High Performance ▸ Multi-Datacenter
WE ARE HIRING!
[email protected]
APACHE KAFKA
APACHE KAFKA REFERENCES ▸ http://kafka.apache.org/ ▸ https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines ▸ https://engineering.linkedin.com/kafka/running-kafka-scale