Socorro: crash collection client for Firefox
• 3-5 million crashes per day
• S3, Postgres, Elasticsearch, RabbitMQ
• Python, JavaScript and some C
• http://socorro.readthedocs.org
Migrated a bike company from a Mac OS 9 DB (Qube) → Postgres
• Was already into open source; started a user group
• Started running conferences
• Was a consultant for a few years
• Worked on a pathological schema with ~500k+ tables and 100k schemas
• Main files to be concerned about (sample pg_hba.conf below):
  postgresql.conf  # DB config (GUCs)
  pg_hba.conf      # access control
  recovery.conf    # replication
• The Postgres team distributes RPMs and .debs, and we have packagers for most major distros
• OS X? Postgres.app for local dev: http://postgresapp.com/ (still the wild west for reproducible deploys; brew, maybe?)
• Windows has good support from EnterpriseDB
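For orientation, a minimal pg_hba.conf sketch; the myapp database/user and the 10.0.0.0/8 network are hypothetical placeholders:

  # TYPE  DATABASE  USER   ADDRESS      METHOD
  local   all       all                 peer   # local socket, OS-user mapping
  host    myapp     myapp  10.0.0.0/8   md5    # password auth from the app network

Rules are matched top to bottom, first match wins; reload (pg_ctl reload, or SELECT pg_reload_conf();) after editing.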
Built-in: FLOAT, NUMERIC, DATE, TIMESTAMPTZ and so many more!
• Common: UUID, JSON, JSONB (9.4), tsearch types (full text search)
• Third party: PostGIS
  – We guess half of all Postgres users use PostGIS
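As a rough illustration of mixing these types, a sketch with a made-up table (the name and columns are ours, not from the slides):

  CREATE TABLE crash_reports (
      id        uuid PRIMARY KEY,
      product   text NOT NULL,
      details   jsonb,                      -- jsonb needs 9.4+
      received  timestamptz DEFAULT now()
  );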
Functions already done for you for doing useful and common things:
  – Network address manipulation: http://www.postgresql.org/docs/9.4/static/functions-net.html
  – Date/time functions and operators: http://www.postgresql.org/docs/9.4/static/functions-datetime.html
• Excellent support for range types means lots of awesome, easy-to-write queries for you (example below)
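For example, a range-type column can replace a pair of start/end columns and the fiddly comparisons that go with them; the table here is hypothetical:

  CREATE TABLE room_bookings (
      room    int,
      booked  tstzrange
  );

  -- which rooms are booked right now? @> is the range "contains" operator
  SELECT room FROM room_bookings WHERE booked @> now();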
• Never SELECT *
• Indexes can be made from functions
• Test with ROLLBACK (sketch below)
• EXPLAIN and EXPLAIN ANALYZE
• ANALYZE is your friend
• pg_stat_user_tables/indexes
• UNLOGGED tables
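On "Test with ROLLBACK": DML (and even most DDL) is transactional in Postgres, so you can try a change and throw it away; a minimal sketch with a hypothetical table:

  BEGIN;
  DELETE FROM raw_crashes WHERE json_data IS NULL;  -- attempt the change
  -- inspect the results, run checks...
  ROLLBACK;  -- nothing is kept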
Something that seems OK, but turns out bad at 200m rows:

  SELECT things, stuff
    FROM foo
    JOIN bar ON foo.uuid = bar.uuid::text;

• Use the type system. It will help you.
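The ::text cast hints that foo.uuid is a text column joined against a real uuid, and casting bar.uuid per row defeats any index on it. One possible fix, assuming foo.uuid actually holds uuid-shaped text, is to make the column types match so the join needs no cast:

  ALTER TABLE foo ALTER COLUMN uuid TYPE uuid USING uuid::uuid;

  SELECT things, stuff
    FROM foo
    JOIN bar ON foo.uuid = bar.uuid;  -- same types, index-friendly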
Make indexes from functions that match common query predicates:

  CREATE INDEX new_index ON raw_crashes (get_text('name', json_data));

• Good as glue between feature updates, or as stepping stones for schema refactoring
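get_text() is not a Postgres built-in; we're assuming it's a thin IMMUTABLE wrapper (index expressions must be immutable). A sketch of what it might look like on 9.3+:

  CREATE FUNCTION get_text(key text, data json)
  RETURNS text
  LANGUAGE sql IMMUTABLE
  AS $$ SELECT data ->> key $$;

  CREATE INDEX new_index ON raw_crashes (get_text('name', json_data));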
EXPLAIN tells you how the planner decided to execute a query.
• Easy help with the output: http://explain.depesz.com/
• EXPLAIN ANALYZE actually runs the query (caution!) and then includes real durations
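Usage is just a prefix on the query; the queries below are hypothetical examples:

  -- shows the plan only; the query is not executed:
  EXPLAIN SELECT count(*) FROM raw_crashes;

  -- runs the query for real and adds actual times and row counts:
  EXPLAIN ANALYZE SELECT count(*) FROM raw_crashes;

  -- for INSERT/UPDATE/DELETE, wrap it so the side effects can be undone:
  BEGIN; EXPLAIN ANALYZE DELETE FROM raw_crashes WHERE json_data IS NULL; ROLLBACK;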
ANALYZE gathers statistics after your tables are created:
  – Creates a histogram of the contents of each column, plus common values (if relevant)
  – Used by the planner to choose efficient plans for queries
• ANALYZE after bulk data loads!
  – Autoanalyze runs periodically on its own, but right after huge data imports the stats can be out of date
• Increase default_statistics_target if your EXPLAIN plans are wacky/indeterminate (the default is 100, which is fine for many situations; sketch below)
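A sketch of the knobs above; the table and column names are hypothetical:

  -- refresh planner stats right after a bulk load:
  ANALYZE raw_crashes;

  -- raise the sampling target for one skewed column, then re-analyze:
  ALTER TABLE raw_crashes ALTER COLUMN name SET STATISTICS 500;
  ANALYZE raw_crashes;

  -- or raise it session-wide (set it in postgresql.conf to make it global):
  SET default_statistics_target = 500;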
Query pg_stat_user_tables/indexes with SELECT
• Estimated row count, index scans, sequential scans, dead/live tuples, and autovacuum/autoanalyze runs
• Common use: find out which indexes aren't being used, or which tables have a ton of expensive seq_scans (queries below)
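Both are ordinary views, so plain SELECTs work; two queries we find handy (the columns come straight from the views):

  -- tables getting hammered by sequential scans:
  SELECT relname, seq_scan, idx_scan, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
   ORDER BY seq_scan DESC
   LIMIT 10;

  -- indexes never used since the stats were last reset:
  SELECT relname, indexrelname
    FROM pg_stat_user_indexes
   WHERE idx_scan = 0;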
UNLOGGED tables

  CREATE UNLOGGED TABLE foo (bar int);

• Fast
• Not replicated; not saved if the database crashes or shuts down uncleanly
• Can be useful for test suites (do your own benchmark!)
• Lots of other uses if you're working with ephemeral data