Productionizing Big Data - stories from the trenches
Roksolana · September 14, 2023 · Technology
Presented at ScalaDays 2023 (Madrid, Spain)
Transcript
Productionizing big data - stories from the trenches
Roksolana Diachuk
• Engineering manager at Captify
• Women Who Code Kyiv Data Engineering Lead
• Speaker
AdTech
AdTech methodologies deliver the right content at the right time to the right consumer.
You have your pipelines in production. What's next?
Types of issues
• Low performance
• Human errors
• Data source errors
Story #1. Unlucky query
Problem: drop of 13 months of user profiles (Reporting pipeline)
Problem: 13 months, hour=22042001
Loading mechanism
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"

val minTime = currentDay.minus(config.feedPeriod)
listFiles.filter(file => file.eventDateTime isAfter minTime)
Solution
loader.ImpalaLoaderConfig.periodToLoad: "P5D"
loader.ImpalaLoaderConfig.periodToLoad: "P1M"
loader.ImpalaLoaderConfig.periodToLoad: "P13M"
…
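A minimal sketch of how this period-driven loading fits together, assuming java.time for the period arithmetic; ImpalaLoaderConfig and FeedFile here are illustrative stand-ins, not the production classes:

import java.time.{LocalDate, Period}

// Illustrative stand-in for the production loader config.
final case class ImpalaLoaderConfig(periodToLoad: String) {
  def feedPeriod: Period = Period.parse(periodToLoad) // "P5D", "P1M", "P13M", ...
}

final case class FeedFile(path: String, eventDate: LocalDate)

def filesToLoad(
    listFiles: Seq[FeedFile],
    currentDay: LocalDate,
    config: ImpalaLoaderConfig
): Seq[FeedFile] = {
  // Everything newer than currentDay - periodToLoad gets (re)loaded,
  // so a "P13M" config silently widens the window to 13 months of data.
  val minTime = currentDay.minus(config.feedPeriod)
  listFiles.filter(_.eventDate.isAfter(minTime))
}

Stepping periodToLoad up gradually (P5D, then P1M, and so on up to P13M) keeps each run's reload window, and hence the blast radius of a bad configuration, small.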
Story #2. Missing data
Data ingestion: Data from Partner X → Extractor → Data costs attribution
Problem: the partner renamed the columns
Received: X Advertiser ID, Language, X Device Type, …, X Media Cost (USD)
Expected: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
Solution
• Rename old columns
• Reload data for the week
Solution
import scala.util.matching.Regex

val colRegex: Regex = """X (.+)""".r
// Collect (oldName, newName) pairs for every column name that starts
// with "X " (the incoming feed only carries single-X prefixes).
val oldNewColumnsMapping = df.schema.fieldNames.collect {
  case oldName @ colRegex(suffix) => (oldName, "XX " + suffix)
}
// Apply the renames one by one on top of the original DataFrame.
oldNewColumnsMapping.foldLeft(df) {
  case (data, (oldName, newName)) => data.withColumnRenamed(oldName, newName)
}

Result: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
Story #3. Divide and conquer
Problem (diagram): processing_time/part-*.parquet → filtering, aggregations → created/part-*.parquet
Problem
• Slow processing
• Large parquet files
• Failing job that consumes lots of resources
Solution
• Write new partitioned state
• Run downstream jobs with smaller states
• Generate a seed partition column: xxhash64(fullUrl, domain) (see the sketch below)

New layout: processing_time/part-*.parquet → created/bucket=0/part-*.parquet … bucket=9/part-*.parquet
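A hedged sketch of that bucketing step, using Spark's built-in xxhash64 and pmod functions; the column and path names are illustrative, not the production ones:

import org.apache.spark.sql.{DataFrame, SaveMode}
import org.apache.spark.sql.functions.{lit, pmod, xxhash64}

// Derive a stable bucket in [0, 9] from the hash of (fullUrl, domain),
// then write the state partitioned by it.
def writeBucketedState(df: DataFrame, dstPath: String): Unit = {
  val bucketed = df.withColumn("bucket", pmod(xxhash64(df("fullUrl"), df("domain")), lit(10)))
  bucketed.write
    .partitionBy("bucket")
    .mode(SaveMode.Overwrite)
    .parquet(dstPath)
}

Downstream jobs can then read a single bucket, e.g. spark.read.parquet(s"$dstPath/bucket=3"), instead of scanning the whole state.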
Story #4. Catch the evolution train
Data organisation evolution
Problem
• Missing columns from the source
• Impala to Databricks migration speed
• Dependency on another team
• Unhappy users
Pipeline: Log-level data → Mapper → Ingestor → Transformer → Data costs calculator → Data costs attribution
(diagrams: the Data costs attribution flow, built from a Data extractor and an Impala loader, before and after the rework)
Solution: 26 columns → 82 columns
26 columns: XX Advertiser ID, Language, XX Device Type, …, XX Media Cost (USD)
82 columns: XX Advertiser ID, Language, XX Device Type, …, XX Partner Currency, XX CPM Fee (USD)
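One common way to survive this kind of schema widening, sketched here under the assumption that the new 82-column schema is available as a StructType; alignToSchema is a hypothetical helper, not from the talk:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit}
import org.apache.spark.sql.types.StructType

// Add any columns the old feed is missing as typed nulls, then reorder
// to match the target schema so old and new data union cleanly.
def alignToSchema(df: DataFrame, targetSchema: StructType): DataFrame = {
  val existing = df.columns.toSet
  val widened = targetSchema.fields.foldLeft(df) { (acc, field) =>
    if (existing.contains(field.name)) acc
    else acc.withColumn(field.name, lit(null).cast(field.dataType))
  }
  widened.select(targetSchema.fieldNames.map(col): _*)
}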
Solution: Data extractor → new ingestion job
Solution
// final step is writing the data
df.write
  .partitionBy("event_date", "event_hour")
  .mode(SaveMode.Overwrite)
  .parquet(dstPath)
Why this solution doesn't work (diagram): data_feed/ (clicks.csv.gz, views.csv.gz, activity.csv.gz) all land under the same event_date=…/ (clicks1.parquet, clicks2.parquet) layout
Attribution data source: Impressions, Clicks, Conversions
Solution: one directory per feed type (impressions, clicks, conversions) fed from clicks.csv.gz, views.csv.gz, activity.csv.gz (see the sketch below)
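A minimal sketch of the per-feed write, reusing the same partition columns as above; the feed-to-directory mapping and paths are illustrative:

import org.apache.spark.sql.{DataFrame, SaveMode}

// Write each feed type under its own root instead of mixing all feeds
// inside one event_date partitioning scheme.
def writeFeed(df: DataFrame, feedType: String, basePath: String): Unit =
  df.write
    .partitionBy("event_date", "event_hour")
    .mode(SaveMode.Overwrite)
    .parquet(s"$basePath/$feedType")

writeFeed(clicksDf, "clicks", "s3://bucket/attribution") // hypothetical DataFrame and path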
Story #5. Cleanup time
Corrupted data: Data from Partner X → Ingestor
IllegalArgumentException: Can't convert value to BinaryType data type
Solution
• Adjust the pipeline
• Reload data for 3 days on S3
• Relaunch the Databricks Auto Loader (see the sketch below)
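A hedged sketch of such a relaunch with Databricks Auto Loader (the cloudFiles source); all paths are placeholders, and pointing the stream at a fresh checkpoint makes it re-discover the reloaded files:

// Assumes a Databricks environment where `spark` is the active SparkSession.
val query = spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "csv")
  .option("cloudFiles.schemaLocation", "s3://bucket/_schemas/partner_x") // placeholder
  .load("s3://bucket/partner_x/input") // placeholder
  .writeStream
  .format("parquet")
  .option("checkpointLocation", "s3://bucket/_checkpoints/partner_x_v2") // fresh checkpoint
  .start("s3://bucket/partner_x/output") // placeholder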
Current solution (diagrams): separate flows for impressions, videoevents, conversions and clicks
Better solution (diagram): the same four feeds, restructured
Conclusions
Prevention mechanisms
1. Set up clear expectations with stakeholders
2. Observability is the key
3. Distribute data transformation load
4. Plan major changes carefully
Conclusions
1. Data setup is always changing
2. Errors can be prevented
3. There are multiple approaches with different tools
4. Data evolution is hard
My contact info: dead_flowers22 · roksolana-d · roksolanadiachuk · roksolanad