Migrating a data stack from AWS to Azure (via Raspberry Pi)

soobrosa

March 04, 2022

Transcript

  1. MIGRATING A DATA STACK FROM AWS TO AZURE VIA RASPBERRY PI
     @SOOBROSA @WUNDERLIST @MICROSOFT
  2. WUNDERLIST
     PRODUCTIVITY APP ON IPHONE, IPAD, MAC, ANDROID, WINDOWS, KINDLE FIRE AND THE WEB
     21+ MILLION USERS, 6 YEARS, HEADCOUNT OF 67
     FROM MONOLITHIC RAILS TO POLYGLOT MICROSERVICES: SCALA, CLOJURE, GO ON AWS
  3. DATA MOSTLY IN POSTGRESQL
     > Hosted on AWS
     > ~33 databases
     > ~120 concurrent connections/database
     > Usually 2-3 tables per database
     > tasks table contains 1 billion records
  4. DATA SIZING
     > Collect every event from clients: 125M/day
     > Parse & filter compressed logs: 375GB/day
     > Mirror every production database: 35GB incremental/day
     > Load external sources (e.g. app store, payments)
     > Calculate KPIs, aggregates, business logic: 200+ queries
     > Self-service data for everybody
  5. WHY MAKE?
     > Blame Jeff Hammerbacher
     > It's machine-readable documentation
     > Supports dependencies, retries
     > Easy to test, even locally, any single target
     > Executes multiple targets in parallel
     > Coding is necessary to modify -> changelog in Git
  6. # Dumps users table from production.
     public.users.csv.gz:
             script/users/dump_users_rds_table.sh | gzip -c8 > $@

     # Upload the compressed dump into S3.
     users_s3_url:=s3://wunderlytics/.../users-delta-$(TODAY)-$(TMP_TOKEN).csv.gz
     public.users.csv.gz.uploaded: public.users.csv.gz
             script/move_to_s3.sh $< $(users_s3_url) > $@

     # Load users into Redshift.
     public.users: | public.users.csv.gz.uploaded
             night-shift/lib/run_sql_template.rb \
               --dialect redshift \
               --aws_creds "`lib/aws_cred.py`" \
               --config config/redshift_fast_queries_pg_credentials.sh \
               --s3file "`cat $(firstword $|)`" \
               script/users/schema.sql.erb \
               script/users/replace_users_table.sql.erb
             touch $@
  7. NIGHT-SHIFT AS ETL
     > cron for scheduling
     > make for dependencies, partial results, retries
     > Glue with bash
     > Inject variables and logic into SQL with Ruby's ERB
     > Runs in a tracking shell, so timing, output and errors are logged (see the sketch below)
     > Monitoring interface in Flask
     > Locally testable
     > Open source
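A minimal sketch of the tracking-shell idea in Python (night-shift itself is make, bash and Ruby ERB glue; the wrapper name, log path and record fields below are illustrative assumptions, not night-shift's actual interface): every command runs through a wrapper that records timing, exit code and output, which is the kind of data a monitoring UI such as the Flask one can then read.

#!/usr/bin/env python
"""Illustrative tracking-shell wrapper: run one command, log timing, output and errors."""
import json
import subprocess
import sys
import time

LOG_PATH = "night_shift_tracking.jsonl"  # hypothetical log location

def tracked_run(cmd):
    """Run `cmd` in a shell, capture output, append a JSON record with timing and exit code."""
    started = time.time()
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    record = {
        "cmd": cmd,
        "started_at": started,
        "duration_sec": round(time.time() - started, 3),
        "exit_code": proc.returncode,
        "stdout_tail": proc.stdout[-2000:],
        "stderr_tail": proc.stderr[-2000:],
    }
    with open(LOG_PATH, "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return proc.returncode

if __name__ == "__main__":
    # e.g. python tracking_shell.py "make public.users"
    sys.exit(tracked_run(" ".join(sys.argv[1:])))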
  8. # Create a temporary table
     CREATE TABLE #notes_staging (
       <%= specs.map {|col, type| "#{col} #{type}"}.join(", ") %>
     ) SORTKEY(id);

     # Load data into the temporary table from S3
     COPY #notes_staging ( <%= columns.join "," %> )
     FROM '<%= s3file %>'
     WITH CREDENTIALS <%= aws_creds %>
     GZIP TRUNCATECOLUMNS DELIMITER '\001' ESCAPE REMOVEQUOTES;

     # Updating the changed values
     UPDATE notes
     SET <%= updates.join "," %>
     FROM #notes_staging u
     WHERE ( u.deleted_at IS NOT NULL OR u.updated_at > notes.updated_at )
       AND notes.id = u.id;

     # Inserting the new rows
     INSERT INTO notes ( <%= columns.join "," %> )
     ( SELECT <%= columns.join "," %>
       FROM #notes_staging u
       WHERE u.id NOT IN (SELECT id FROM notes) );
  9. ANALYTICS IN REDSHIFT
     10 TB COMPRESSED AGGREGATION
     Two clusters:
     > Hot: 22 x dc1.large (2 vCPU, 15GB RAM, 160GB SSD)
     > Cold: 6 x ds2.xlarge (4 vCPU, 31GB RAM, 2TB HDD)
  10. [Data-flow diagram, 2015-12: the AWS + DWH reporting stack. Clients (phone, tablet, etc.) and microservice applications feed logging (Rsyslog, Noxy) and tracking (SNS/SQS); logs are crunched on EMR (Hadoop); a dumper mirrors the production database(s) and external sources; everything lands in S3 and is loaded into hot and cold storage (Redshift); Postamt handles email and Chart.io serves reporting.]
  11. TRANSLATED TO BUSINESS
     > Total Cost of Ownership is dead serious
     > Can't do 24/7 support on data
     > Forensic analysis is not our scope
     > Remove if you can
  12. GOALS
     > Simplify
     > Abstract away AWS-specific parts
     > Remove unnecessary complications like Hadoop
     > Add Azure support for the components
     > Refactor and make the code reusable
  13. [Data-flow diagram, 2015-12 again (same as slide 10), shown as the starting point for the migration.]
  14. [Data-flow diagram, 2016-01: same as the 2015-12 stack, except EMR (Hadoop) is replaced by Jr. Beaver for log crunching.]
  15. EMR TO JR. BEAVER
     > Detects the format of every log line
     > Log cruncher that standardizes microservices' logs
     > Classifies events' names based on the API's URL (see the sketch below)
     > Filters the analytically interesting rows
     > Map/reduce functionality
     > Hadoop+Scala to make+pypy
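Jr. Beaver is not shown in the deck beyond these bullets, so the following is only a sketch of the crunching idea in plain Python: classify an event name from the request URL and keep the analytically interesting rows. The rules, field names and JSON-lines input format are assumptions made for the sketch.

"""Sketch of URL-based event classification and filtering (rules and fields are made up)."""
import json
import re
import sys

# Hypothetical URL -> event-name rules; a real rule set would be far longer.
EVENT_RULES = [
    (re.compile(r"^POST /api/v\d+/tasks"), "task.created"),
    (re.compile(r"^DELETE /api/v\d+/tasks/\d+"), "task.deleted"),
    (re.compile(r"^POST /api/v\d+/lists"), "list.created"),
]
INTERESTING = {"task.created", "task.deleted", "list.created"}

def classify(request_line):
    """Return the event name for an HTTP request line, or None if no rule matches."""
    for pattern, event in EVENT_RULES:
        if pattern.search(request_line):
            return event
    return None

def crunch(lines):
    """Yield standardized records for the analytically interesting log lines only."""
    for line in lines:
        try:
            record = json.loads(line)  # assume the services emit JSON lines
        except ValueError:
            continue                   # unknown format: skip the line
        event = classify(record.get("request", ""))
        if event in INTERESTING:
            yield {"event": event, "user_id": record.get("user_id"), "ts": record.get("ts")}

if __name__ == "__main__":
    for row in crunch(sys.stdin):
        print(json.dumps(row))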
  16. JR. BEAVER
     > Configurable with YAML files (see the sketch below)
     > Written in pypy instead of Go
     > Uses night-shift's make for parallelism
     > "Big RAM kills Big Data"
     > No more Hadoop+Scala headache
     > Gives monitoring
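Since the deck only says "configurable with YAML files", here is one hedged way the rules from the previous sketch could be externalized; the keys and layout are assumptions, not Jr. Beaver's actual schema (requires PyYAML).

"""Sketch of YAML-driven rules for the crunching example above (schema is assumed)."""
import re
import yaml  # PyYAML

EXAMPLE_CONFIG = r"""
events:
  - pattern: '^POST /api/v\d+/tasks'
    name: task.created
  - pattern: '^DELETE /api/v\d+/tasks/\d+'
    name: task.deleted
interesting: [task.created, task.deleted]
"""

def load_rules(text):
    """Compile (regex, event-name) pairs and the set of interesting events from YAML."""
    cfg = yaml.safe_load(text)
    rules = [(re.compile(e["pattern"]), e["name"]) for e in cfg["events"]]
    return rules, set(cfg["interesting"])

if __name__ == "__main__":
    rules, interesting = load_rules(EXAMPLE_CONFIG)
    print(len(rules), "rules;", sorted(interesting))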
  17. [Data-flow diagram, 2016-01 again (same as slide 14), shown before the next change.]
  18. [Data-flow diagram, 2016-02: tracking is now handled by Hamustro, writing directly to S3 instead of going through SNS/SQS; the rest of the stack (clients, Rsyslog, Noxy, Jr. Beaver logging, Redshift hot/cold storage, Chart.io reporting) is unchanged.]
  19. HOMEBREW TRACKING TO HAMUSTRO
     > Tracks client device events
     > Saves to cloud targets
     > Handles sessions and strict ordering of events
     > Rewritten from NodeJS to Go
     > Uses S3 directly instead of SNS/SQS (inspired by Marcio Castilho)
  20. HAMUSTRO
     > Supports Amazon SNS/SQS and Azure Queue Storage
     > Supports Amazon S3 and Azure Blob Storage
     > Tracks up to 6M events/min on a single 4 vCPU server
     > Uses Protobuf/JSON for event sending (see the sketch below)
     > Written in Go
     > Open source
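For illustration only, here is roughly what a client sending a JSON event batch to a Hamustro-style collector could look like in Python; the endpoint path and payload fields are assumptions for this sketch, not Hamustro's actual wire format (which can also be Protobuf).

"""Sketch of a client batching one session's events and POSTing them as JSON (fields assumed)."""
import json
import time
import urllib.request

COLLECTOR_URL = "http://tracker.example.com/api/v1/track"  # hypothetical endpoint

def send_batch(device_id, session_id, events):
    """POST one session's events in order, so the collector can keep them strictly ordered."""
    payload = {
        "device_id": device_id,
        "session_id": session_id,
        "sent_at": int(time.time()),
        "events": [{"nr": i + 1, "name": name, "at": at}
                   for i, (name, at) in enumerate(events)],
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    now = int(time.time())
    send_batch("device-42", "session-7",
               [("task.created", now - 5), ("task.completed", now)])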
  21. S3 VS. SNS IN A SINGLE 4 VCPU COMPUTER
     [Bar chart: Hamustro's S3 dialect sustains ~6M events/min, while its SNS dialect manages ~60k events/min, roughly a 100x difference.]
  22. MAPPING AND BENCHMARKING
     > Amazon S3 -> Azure Blob Storage
     > Amazon Redshift -> Azure SQL Data Warehouse
     > Ubuntu 14.04 -> Ubuntu 14.04
     > Tracking: Amazon SNS/SQS + Hamustro -> Hamustro
     > Chartio -> Chartio, with Power BI under evaluation
  23. TOOLS IN UNIX FOR PRODUCTION
     > azrcmd: CLI to download and upload files to Azure Blob Storage; provides s3cmd-like functionality
     > cheetah: CLI for MSSQL that works on OSX and Linux and also supports Azure SQL Data Warehouse; similar to psql and superior to sql-cli and Microsoft's sqlcmd
  24. [Data-flow diagram, 2016-02 again (same as slide 18), shown before the Azure migration.]
  25. [Data-flow diagram, 2016-04: the Azure + DWH reporting stack. Clients (phone, tablet, etc.) are tracked by Hamustro and logged via Weasel, Noxy and Jr. Beaver into Azure Blob Storage (ABS); production database(s) and external sources are loaded into Azure SQL Data Warehouse (SQLDWH); Chart.io serves reporting.]
  26. ADAPT SQL APPROACH
     > Different loading strategies
     > Scale up while the data pipeline is running
     > Set up the right resource groups for every user
     > Define distributions and use partitions (see the sketch below)
     > Use full-featured SQL
     > Find the perfect balance between concurrency and speed
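A hedged sketch of what "define distributions and use partitions" and per-user resource groups can look like on Azure SQL Data Warehouse, driven from Python with pyodbc. The connection string, table, columns and login name are assumptions; only the DDL options (hash distribution, clustered columnstore, partitioning) and resource classes (smallrc to xlargerc, assigned via sp_addrolemember) are standard SQL DW features.

"""Sketch: hash-distributed, partitioned table plus a resource class for the ETL login."""
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=dwh;UID=loader;PWD=<password>"
)  # hypothetical server, database and login

DDL = """
CREATE TABLE dbo.tasks (
    id         BIGINT    NOT NULL,
    user_id    BIGINT    NOT NULL,
    created_at DATETIME2 NOT NULL
)
WITH (
    DISTRIBUTION = HASH(user_id),          -- keep one user's rows on one distribution
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (created_at RANGE RIGHT FOR VALUES ('2016-01-01', '2016-02-01'))
);
"""

# Resource classes trade concurrency for per-query memory; give the loading user a bigger one.
RESOURCE_CLASS = "EXEC sp_addrolemember 'largerc', 'loader';"

if __name__ == "__main__":
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cursor = conn.cursor()
        cursor.execute(DDL)
        cursor.execute(RESOURCE_CLASS)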
  27. HYBRID, CLOUD-AGNOSTIC DATA STACK
     * POST-CLOUD DATA INFRASTRUCTURE, AKA A DOZEN RPIS POWERTAPED TOGETHER
     * REDNECK DATA, AS OPPOSED TO DATA SCIENCE