
Architectures for PostgreSQL High Availability & Disaster Recovery

This presentation was given by Alexey Shiskin (Cloud Architect, Postgres Professional) at the "High Availability Architectures for PostgreSQL" webinar.

Postgres Professional

October 07, 2021

Transcript

  1. Postgres Professional
     - Postgres Professional, established in 2015, is a key contributor to the PostgreSQL community
     - At Postgres Professional we develop the Postgres Pro database, a private PostgreSQL fork
     - Postgres Professional also specializes in 24x7 technical support and other professional services (database migration, audit, and performance tuning) for PostgreSQL
  2. HA/DR for PostgreSQL databases
     - When do you need to consider an HA/DR architecture?
     - HA/DR for Postgres in a nutshell (how to make it work)
     - Introduction to commonly used HA/DR architectures
     - Pros and cons of various HA/DR architectures for PostgreSQL
     - HA/DR field experience
  3. SLA: RTO and RPO
     - RTO (Recovery Time Objective): how long an application can be unavailable to business users
       - 99.99% availability: 52.56 minutes of downtime per year (~0.9 hrs.)
       - 99.9% availability: 525.6 minutes of downtime per year (~9 hrs.)
       - 99% availability: 5,256 minutes of downtime per year (~88 hrs.)
     - RPO (Recovery Point Objective): how much business data can be lost
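     (For reference, these downtime figures follow directly from the number of minutes in a 365-day year: (1 - 0.999) x 525,600 = 525.6 minutes per year for a 99.9% availability target.)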
  4. A sample customer’s SLA
     - Mission critical application: RTO 99.99%, RPO 0
     - Business critical application: RTO 99.9%, RPO 0
     - Business operational application: RTO 98%, RPO 1 hr.
     - Office operational application: RTO 90%, RPO 12 hrs.
  5. HA/DR technologies (1/4): manual switchover/failover
     - the fewer moving parts in the technology stack, the better
     - monitoring and alerting systems work well enough to inform the Operations team about database issues
     - switchover/failover scenarios are well documented, and both the day and night shifts of the Operations team have adequate expertise to cope with database availability issues
     - RTO is not very strict (up to 5 minutes for switchover/failover tasks)
     - the number of databases is relatively small (up to 50)
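     For manual failover, the promotion step itself is small; a minimal sketch, assuming a standby host named standby1 and a default data directory (both placeholders), and leaving fencing of the old master out of scope:

         # Check how far the standby has replayed WAL before promoting it
         psql -h standby1 -c "SELECT pg_last_wal_replay_lsn();"

         # Promote the standby to read/write master (PostgreSQL 12+)
         psql -h standby1 -c "SELECT pg_promote();"

         # Equivalent promotion via pg_ctl on the standby host itself
         pg_ctl promote -D /var/lib/postgresql/data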
  6. HA/DR technologies (2/4): HA-cluster
     (Don’t confuse this with a Postgres database cluster, which is a collection of databases managed by a single instance of a running database server.)
     - RTO is strict (within a minute for switchover/failover tasks)
     - the number of databases is large (100+)
  7. HA/DR technologies (3/4): replication
     - logical (database transaction level)
     - streaming (database block level)
     - file system (block device level)
     - disk/LUN (raw device level)
     Each kind of replication can be synchronous or asynchronous.
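     To make the synchronous/asynchronous distinction concrete for streaming replication, a sketch of the master-side settings (the standby names are placeholders):

         # Synchronous: commits wait for at least one of the listed standbys
         psql -c "ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (standby1, standby2)';"
         psql -c "SELECT pg_reload_conf();"

         # Asynchronous again: clear the list of synchronous standbys
         psql -c "ALTER SYSTEM SET synchronous_standby_names = '';"
         psql -c "SELECT pg_reload_conf();"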
  8. HA/DR technologies (4/4): backup
     - online or offline
     - full or incremental
     - physical or logical
     - data files or WAL files
     - via database tools or via disk-array snapshots
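     As a concrete instance of the "WAL files" option, continuous WAL archiving is enabled with two settings; a minimal sketch (the archive path is a placeholder, and a production archive_command should be more robust than a plain copy):

         # archive_mode requires a server restart to take effect
         psql -c "ALTER SYSTEM SET archive_mode = 'on';"
         psql -c "ALTER SYSTEM SET archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f';"
         pg_ctl restart -D /var/lib/postgresql/data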
  9. HA-cluster
     The list of HA-clusters (sorted by popularity among our customers):
     - Patroni - https://github.com/zalando/patroni
     - Corosync/Pacemaker - https://github.com/ClusterLabs
     - Stolon - https://github.com/sorintlab/stolon
     - Postgres Pro Multimaster - https://github.com/postgrespro/mmts
     - Veritas - https://www.veritas.com/availability/infoscale
  10. Patroni and Stolon (1/2)
     Patroni and Stolon are similar in functionality and architecture:
     - depend on a DCS (Distributed Configuration Store)
     - require Postgres streaming replication
     - suitable for physical servers and virtual machines (VMs)
     - open-source and free of charge
     Patroni uses an external TCP-proxy to connect to the master or standby(s); Stolon has a built-in TCP-proxy.
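     For Patroni, day-to-day cluster operations go through patronictl; a brief sketch, assuming a configuration file at /etc/patroni.yml and a cluster named main (both placeholders):

         # Show cluster members, their roles and replication lag
         patronictl -c /etc/patroni.yml list

         # Planned role change to a healthy standby
         patronictl -c /etc/patroni.yml switchover main

         # Unplanned role change when the master is lost
         patronictl -c /etc/patroni.yml failover main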
  11. Patroni and Stolon (2/2)
     [Architecture diagram: the client reaches the master or standbys through an entry point; each node runs a manager, and the managers reach consensus through the DCS]
     - Entry point: TCP-proxy, Virtual IP
     - DCS: etcd, Consul, ZooKeeper
     - Manager: patroni-bot, stolon-sentinel
     - Consensus: Raft protocol
  12. Stolon architecture
     [Architecture diagram: clients connect through proxies; sentinels coordinate the keepers, each of which manages a master or standby Postgres instance]
  13. Stolon (3-node HA-cluster)
     [Failover diagram: when the master fails, the synchronous standby is promoted to master and the proxies route client connections to it]
  14. Corosync/Pacemaker and Veritas
     Corosync/Pacemaker and Veritas are similar in functionality and architecture:
     - use resource agents (disk volume, file system, IP-address, Postgres)
     - use a Virtual IP-address (VIP) to connect to the master or standby(s)
     - can be used with streaming replication and shared-disk configurations
     - mostly used with physical servers
     - can be applied to build geo-clusters
     Corosync/Pacemaker is open-source and free of charge. Veritas is proprietary and requires a license (it is the only HA-cluster listed here that integrates with disk replication).
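     With Corosync/Pacemaker, the VIP is simply another cluster resource; a minimal sketch using pcs (the address is a placeholder, and the Postgres resource itself would use a promotable resource agent such as PAF's pgsqlms, whose configuration is beyond this sketch):

         # Create the virtual IP that clients use to reach the current master
         pcs resource create pgsql-vip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s

         # Check where resources are currently running
         pcs status resources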
  15. Corosync/Pacemaker and Veritas architecture
     [Architecture diagrams: clients connect through a Virtual IP to the master, which replicates synchronously to standbys under cluster quorum; a 2+1 variant replaces the third database node with a referee]
  16. Corosync/Pacemaker (3-node HA-cluster)
     [Failover diagram: when the master fails, quorum is re-established, a synchronous standby is promoted to master, and the Virtual IP moves to it, so clients reconnect to the new master]
  17. Corosync/Pacemaker (2+1 HA-cluster)
     [Failover diagram: two database nodes plus a referee maintain quorum; when the master fails, the standby is promoted and the Virtual IP moves with it]
  18. Postgres Pro Multimaster
     Postgres Pro Multimaster differs from the other HA-clusters:
     - uses logical replication
     - all nodes can process read-write requests (all nodes are masters)
     - delivers the minimal possible switchover/failover time (single-digit seconds)
     - open-source, but requires a license to run in a production environment
  19. Postgres Pro Multimaster architecture
     [Architecture diagram: three master nodes connected by logical replication]
     3-phase commit: 1. PREPARE, 2. PRECOMMIT, 3. COMMIT
  20. Streaming replication
     - Replication is used for disaster recovery purposes (e.g. the main site is lost).
     - If the required RPO is strictly zero, synchronous replication has to be used between the main and DR sites; otherwise asynchronous replication is enough.
     - Postgres streaming replication is the most popular solution among our customers; it is included in both PostgreSQL and the Postgres Pro database.
     - Streaming replication is integrated with HA-clusters (Patroni, Corosync/Pacemaker, Stolon) and uses a master/standby(s) configuration, where the master is available for read/write requests while the standby(s) can only serve read-only requests.
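     A streaming-replication standby is typically created from a base backup of the master; a minimal sketch (host name, replication user, and data directory are placeholders):

         # -R writes standby.signal and primary_conninfo (PostgreSQL 12+);
         # -X stream also ships WAL generated while the copy runs
         pg_basebackup -h master1 -U replicator -D /var/lib/postgresql/data -R -X stream -P

         # Start the standby; it connects to the master and replays WAL continuously
         pg_ctl start -D /var/lib/postgresql/data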
  21. Logical replication
     - Postgres logical replication is slower than streaming replication by design.
     - Logical replication is more flexible:
       - replication of only some database objects instead of the whole database
       - replication between two databases of different major versions
       - bi-directional replication between two databases
     - Postgres Pro Multimaster uses logical replication to set up an HA-cluster where all nodes can handle read/write requests.
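     In stock PostgreSQL, logical replication of selected tables is configured with a publication on the source and a subscription on the target; a minimal sketch (host, database, table, and user names are placeholders):

         # On the source database: publish only the tables of interest
         psql -h master1 -d appdb -c "CREATE PUBLICATION app_pub FOR TABLE orders, customers;"

         # On the target (which may run a different major version): subscribe
         psql -h replica1 -d appdb -c "CREATE SUBSCRIPTION app_sub CONNECTION 'host=master1 dbname=appdb user=replicator' PUBLICATION app_pub;"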
  22. Disk/LUN replication
     - Replication happens at the disk-array level (and usually requires a license).
     - The disk arrays on both the main and DR sites have to support this replication (they have to be of the same type).
     - Delivers maximum performance for write-intensive load profiles.
     - The Veritas HA-cluster is integrated with disk/LUN replication.
  23. Backup (1/2)
     The most popular tool for database backup among our customers is ‘pg_probackup’ (https://github.com/postgrespro/pg_probackup):
     - supports both full and incremental backup/restore
     - supports point-in-time recovery (PITR)
     - provides a backup catalog
     - supports backup compression
     - backup validation without an actual data restore
     - parallel backup/restore tasks
     - and more
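     A typical pg_probackup workflow, sketched with a placeholder catalog path and instance name (option spellings follow the project's documentation, but check your version):

         # One-time setup: initialize the backup catalog and register the instance
         pg_probackup init -B /backup
         pg_probackup add-instance -B /backup -D /var/lib/postgresql/data --instance node1

         # Take a full backup, then cheaper incremental (delta) backups;
         # --stream ships the needed WAL alongside the backup
         pg_probackup backup -B /backup --instance node1 -b FULL --stream
         pg_probackup backup -B /backup --instance node1 -b DELTA --stream

         # Validate a backup without actually restoring any data
         pg_probackup validate -B /backup --instance node1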
  24. Backup (2/2)
     The other popular approach to backup/restore uses disk-array snapshots, which allow the database to be backed up and restored very quickly (seconds to single-digit minutes) regardless of its size.
     Built into PostgreSQL - ‘pg_basebackup’:
     - full backups only (no incremental backups)
     - no parallel backup/restore tasks
     Built into PostgreSQL - ‘pg_dump’ and ‘pg_dumpall’:
     - logical backup (no PITR)
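     For comparison, the built-in tools in one-line form (paths and database names are placeholders):

         # Physical backup of the whole instance, compressed tar format
         pg_basebackup -D /backup/base -Ft -z -X stream

         # Logical backup of one database, and of the whole instance (no PITR)
         pg_dump -Fc -f /backup/appdb.dump appdb
         pg_dumpall -f /backup/all.sql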
  25. Typical HA/DR architectures
     Typical HA/DR architectures use HA-clusters, replication and backup:
     - local HA-cluster (all nodes within one site)
     - stretched HA-cluster (spanning two or three sites, up to 30 km apart)
     [Diagrams: a 3-node HA-cluster with synchronous replication, and a 2+1 HA-cluster with a referee node]
  26. Geo HA-cluster (50+ km between regions) (1/2)
     [Diagram: the master and standbys in the main region replicate asynchronously over the Internet to a standby in the DR region]
  27. Geo HA-cluster (50+ km between regions) (2/2)
     [Diagram: the main region (master, standbys, referee) replicates asynchronously over the Internet to a cascade standby in the DR region, which feeds leaf standbys; clients are redirected between regions via DNS updates]
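     The cascade standby in this diagram is built like any other standby, except that it is cloned from (and its primary_conninfo points at) an upstream standby rather than the master; a minimal sketch (host names are placeholders):

         # Cloning from an upstream standby yields a cascaded (leaf) standby
         pg_basebackup -h standby1 -U replicator -D /var/lib/postgresql/data -R -X stream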
  28. Postgres Pro enhancements (1/2)
     - Support for relaxed synchronous replication restrictions, which allows the master server to continue running while some of the standbys are temporarily unavailable: https://postgrespro.com/docs/enterprise/13/runtime-config-replication#GUC-SYNCHRONOUS-STANDBY-GAP
     - Automatic database block repair via streaming replication from a standby in case of data corruption: https://postgrespro.com/docs/enterprise/13/warm-standby#REPAIR-PAGE-FROM-STANDBY
  29. Postgres Pro enhancements (2/2)
     - Corrupted WAL data restore from in-memory WAL buffers: https://postgrespro.com/docs/enterprise/13/wal-restoration
     - Support for database minor version upgrades without a database instance restart: https://postgrespro.com/docs/enterprise/13/release-proee-13-2-1
     - Compressed File System (CFS) offers database compression at the database block level: https://postgrespro.com/docs/enterprise/13/cfs
  30. Documentation links
     - PostgreSQL ‘High Availability, Load Balancing, and Replication’ documentation: https://www.postgresql.org/docs/14/high-availability.html
     - PostgreSQL ‘Backup and Restore’ documentation: https://www.postgresql.org/docs/14/backup.html
     - Postgres Pro Enterprise documentation: https://postgrespro.com/docs/enterprise/13/index