

Architecting petabyte-scale analytics by scaling out Postgres on Azure with Citus | PASS Hybrid Virtual Group Nov 2020 | Alicja Kucharczyk

This is the story about a Postgres PaaS service powering a 1.5 petabyte analytics application with 2816 cores and 18.7 TB of memory in the Citus cluster at Microsoft. We will discuss its architecture, features, and use case:

The Windows team measures the quality of new software builds by scrutinizing 20,000 diagnostic metrics based on data flowing in from 800 million Windows devices. At the same time, the team evaluates feedback from Microsoft engineers who are using pre-release versions of Windows updates. At Microsoft, the Windows diagnostic metrics are displayed on a real-time analytics dashboard called “Release Quality View” (RQV), which helps the internal “ship-room” team assess the quality of the customer experience before each new Windows update is released. Given the importance of Windows for Microsoft’s customers, the RQV analytics dashboard is a critical tool for Windows engineers, program managers, and execs.

Azure Database for PostgreSQL

November 24, 2020


Transcript

  1. Architecting petabyte-scale system by scaling out Postgres on Azure with Citus. Alicja Kucharczyk, @StiepanTrofimo, EMEA Global Blackbelt OSS Data Tech Specialist. PASS, November 2020
  2. Why We Love Postgres • Open Source • Constraints • Extensions • PostGIS • Citus • B-tree, GIN, BRIN, & GiST • Available as a database service • Decades of robustness • Millions of happy users • Foreign data wrappers • Window functions • CTEs • ACID • Full text search • JSONB • Rich datatypes • Community • Rollups
  3. How do you know if the next update to your software is ready for hundreds of millions of customers?
  4. The short story: Min Wei, Principal Engineer at Microsoft, discovered the open source Citus extension to Postgres by listening to a recorded conference talk on his drive home. Impressed with the early results, he transitioned the project from a proof of concept into an official project. A few months later, Microsoft acquired Citus Data.
  5. “Distributed PostgreSQL is a game changer. We can support more than 6M queries every day, on 2 PB of data. With Citus, most of our queries respond in less than 0.2 seconds.”
  6. Architecting petabyte-scale analytics by scaling out Postgres on Azure with the Citus extension: aka.ms/blog-petabyte-scale-analytics
  7. Hyperscale (Citus) OPEN SOURCE EXTENSION. PURE POSTGRES, NOT A FORK. TURNS POSTGRES INTO A DISTRIBUTED, SHARDED DATABASE. ALL THE BENEFITS OF POSTGRES, WITHOUT WORRYING ABOUT SCALE.
  8. Azure Database for PostgreSQL – enterprise-ready, fully managed community PostgreSQL with built-in HA and multi-layered security – is available in two deployment options. Single Server: fully managed, single-node PostgreSQL. Example use cases: apps with JSON, geospatial support, or full-text search; transactional and operational analytics workloads; cloud-native apps built with modern frameworks. Hyperscale (Citus): high-performance Postgres for scale out. Example use cases: scaling PostgreSQL multi-tenant, SaaS apps; real-time operational analytics; building high-throughput transactional apps. We’re talking about Hyperscale (Citus) today.
  9. Scale horizontally across hundreds of cores with Hyperscale (Citus): shard your Postgres database across multiple nodes to give your application more memory, compute, and disk storage. Easily add worker nodes to achieve horizontal scale; scale up to 100s of nodes. [Diagram: the application runs “select from table” against the coordinator, which holds the table metadata and routes “select from table_1001” … “select from table_1004” to shards Table_1001–Table_1004 on data nodes 1–N. Each node is PostgreSQL with Citus installed; 1 shard = 1 PostgreSQL table.]
  10. Terminology. Coordinator – stores metadata; the node the application connects to. Worker / data nodes – nodes which store data in the form of shards. Sharding – the process of dividing data among nodes. Shard – a partition of the data containing a subset of rows.
  11. Co-located join. APPLICATION: SELECT count(*) FROM ads JOIN campaigns ON ads.company_id = campaigns.company_id WHERE ads.designer_name = ‘Isaac’ AND campaigns.company_id = ‘Elly Co’; [Diagram: the coordinator node uses its metadata to rewrite the query as “SELECT … FROM ads_1001, campaigns_2001 …” and sends it to worker nodes W1, W2, W3 … Wn.] It’s logical to place shards containing related rows of related tables together on the same nodes. Join queries between related rows can reduce the amount of data sent over the network.
  12. Effectively manage data scale out. The shard rebalancer redistributes shards across old and new worker nodes for balanced data scale out without any downtime. The shard rebalancer will recommend a rebalance when shards can be placed more evenly. For more control, use tenant isolation to easily allocate dedicated resources to specific tenants with greater needs.
  13. Scaled-out transaction. APPLICATION: BEGIN; UPDATE campaigns SET feedback = ‘relevance’ WHERE company_type = ‘platinum’; UPDATE ads SET feedback = ‘relevance’ WHERE company_type = ‘platinum’; COMMIT; [Diagram: the coordinator node uses its metadata to send each worker node W1, W2, W3 … Wn its part of the transaction: “BEGIN … assign_distributed_transaction_id … UPDATE campaigns_2001 / campaigns_2009 / campaigns_2017 … COMMIT PREPARED …”.] Hyperscale (Citus) leverages Postgres’ built-in 2PC protocol to prepare transactions via the coordinator node. Once the worker nodes commit their transactions, release their locks, and send acknowledgements, the coordinator node completes the scaled-out transaction.
  14. Distributed Tables. Definition: tables that are sharded. Classification: large tables (>10 GB) – shard on the same key (may require the addition of a shard key); all tables are co-located, which enables localized and fast joins on workers. Ex: transactions, events, etc. SELECT create_distributed_table(table_name, column_name);
  15. Reference Tables. Definition: replicated to all the nodes (extra write latency). Classification: small tables (< 10 GB); efficient joins with distributed tables; cannot have a sharding dimension. Ex: countries, categories. SELECT create_reference_table(table_name);
  16. Local Tables. Plain Postgres tables on the coordinator node. Use for admin tables that don’t interact with the main tables, or for a separate micro-service that doesn’t need sharding.
  17. Hyperscale (Citus): customer view. [Diagram: the application connects through a PostgreSQL client to the coordinator, which has a public IP; worker node 0 and worker node 1 have no public IP. Together they form a server group.]
  18. Hyperscale (Citus): high availability. [Diagram: the coordinator (public IP) and worker nodes 0 and 1 (no public IP) in AZ[0] each have a standby in AZ[1], kept up to date via Postgres sync replication.]
  19. Our Azure Postgres service page, and our blog! Azure Database for PostgreSQL: https://aka.ms/azure-postgres. Azure Postgres Blog: https://aka.ms/azure-postgres-blog
  20. Wealth of documentation resources, too. Azure Postgres Quickstart Docs: https://aka.ms/azure-postgres-quickstart. Migrations: https://aka.ms/postgres-migration-tutorial. Azure Database for PostgreSQL: https://aka.ms/azure-postgres. Azure Postgres Blog: https://aka.ms/azure-postgres-blog
  21. Citus open source packages on GitHub; also, email. Citus on GitHub: https://aka.ms/citus. Email: [email protected]. Azure Postgres Quickstart Docs: https://aka.ms/azure-postgres-quickstart. Migrations: https://aka.ms/postgres-migration-tutorial. Azure Database for PostgreSQL: https://aka.ms/azure-postgres. Azure Postgres Blog: https://aka.ms/azure-postgres-blog