Lessons Learned from Five Years of Multi-Cloud at PagerDuty

PagerDuty has been running multi-cloud infrastructure for the past five years. In that time, we have tested multiple providers, learned about fun networking routes, seen what traffic filtering looks like, and encountered many other horrors.

In this talk, I will go over the decisions and events that led to PagerDuty's multi-cloud environment and how we have managed it. I will cover the benefits and problems of our setup, and the assumptions we made that turned out to be completely wrong. By the end of this talk, you will be better able to answer whether a multi-cloud setup is the right thing for your team or company.

Arup Chakrabarti

March 28, 2018

Transcript

  1. Arup Chakrabarti, Director of Engineering, PagerDuty
     Five Years of Multi-Cloud at PagerDuty
     A ROMANTIC AND COMPLICATED LOVE STORY
     SREcon Americas 2018
     @arupchak
  2. @arupchak PagerDuty Early 2012
     • Cloud Native
     • Used Failover for High Availability
     • MySQL Master/Slave Topology based on DRBD
     • Stateless Rails app behind Load Balancers
     • AWS us-east-1 and failover site in New Jersey
  3. @arupchak PagerDuty Late 2012
     • Started teasing apart PagerDuty into separate Services
     • Started using Quorum based systems (see the sketch after this slide)
     • Cassandra and Zookeeper
     • Favored Durability over Performance
     • Still needed Regions or Datacenters within 50ms
     • Tried AWS us-east-1, us-west-1, us-west-2
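The durability-over-performance trade-off above comes from quorum replication: a write only succeeds once a majority of replicas acknowledge it. As a minimal, hypothetical sketch (not PagerDuty's actual code), this is roughly what quorum-level writes look like with the open-source Python driver for Cassandra; the contact points, keyspace, and table are placeholders:

    # Minimal sketch of quorum writes with the DataStax Python driver.
    # Contact points, keyspace, and table names are hypothetical placeholders.
    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    # Nodes spread across regions/providers (placeholder addresses).
    cluster = Cluster(["10.0.1.10", "10.0.2.10", "10.0.3.10"])
    session = cluster.connect("incidents")  # hypothetical keyspace

    # QUORUM waits for a majority of replicas to acknowledge the write,
    # trading latency for durability.
    stmt = SimpleStatement(
        "INSERT INTO events (id, payload) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    session.execute(stmt, ("event-123", "triggered"))

With a replication factor of three, every quorum write waits on two replicas; if those replicas sit in different regions, each write pays an inter-region round trip, which helps explain the slide's requirement for regions or datacenters within 50ms of each other.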
  4. @arupchak PagerDuty Early 2018
     • Software deployed to AWS us-west-1, us-west-2 and Azure Fresno
     • ~50 Services across ~10 Engineering teams
     • Each team owns the entire vertical stack
  5. @arupchak Portability Benefits
     • Everything is treated as Compute (see the sketch below)
     • If there is a base Ubuntu image, we can secure and use it
     • Actually helped in pricing
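One way to read "everything is treated as Compute" is that provisioning assumes nothing beyond a plain Ubuntu host reachable over SSH, never a provider-managed service. The sketch below is a hypothetical illustration of that idea, not PagerDuty's tooling; the host names, addresses, and hardening steps are made up:

    # Hypothetical sketch: a provider-neutral bootstrap that assumes only
    # SSH access to a base Ubuntu image, whichever cloud it runs in.
    import subprocess
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        address: str
        provider: str  # "aws" or "azure"; informational only

    def bootstrap(host: Host) -> None:
        """Harden and configure a host using nothing provider-specific."""
        steps = [
            "sudo apt-get update",
            "sudo apt-get install -y fail2ban unattended-upgrades",
            "sudo ufw default deny incoming && sudo ufw --force enable",
        ]
        for step in steps:
            subprocess.run(["ssh", host.address, step], check=True)

    # The same bootstrap path runs for an AWS VM and an Azure VM alike.
    for h in (Host("app-1", "10.0.1.5", "aws"), Host("app-2", "10.1.1.5", "azure")):
        bootstrap(h)

Because nothing in this path cares which provider minted the VM, the same images and hardening can be priced out across providers, which is where the "actually helped in pricing" point comes from.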
  6. @arupchak Engineering Culture Benefits
     • Teams built for Reliability early in the SDLC
     • Teams had deep expertise in their technical stacks (double-edged sword)
     • Failure Injection / Chaos Engineering (sketched below)
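"Failure Injection / Chaos Engineering" here means deliberately breaking part of a service and confirming the rest of the system absorbs it. A minimal, hypothetical sketch of that practice (the host names, service name, and health-check URL are placeholders, not PagerDuty's systems):

    # Hypothetical failure-injection sketch: stop a service on one random host,
    # then verify the user-facing endpoint stays healthy. All names are placeholders.
    import random
    import subprocess
    import urllib.request

    HOSTS = ["notify-1.internal", "notify-2.internal", "notify-3.internal"]
    HEALTH_URL = "https://example.com/health"

    def inject_failure() -> str:
        """Stop the service on one host; easy to roll back afterwards."""
        victim = random.choice(HOSTS)
        subprocess.run(["ssh", victim, "sudo systemctl stop notify"], check=True)
        return victim

    def still_healthy(url: str = HEALTH_URL) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    victim = inject_failure()
    assert still_healthy(), f"failover did not cover the loss of {victim}"
    subprocess.run(["ssh", victim, "sudo systemctl start notify"], check=True)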
  7. @arupchak Deep Technical Expertise Required
     • Forced to only use common Compute across providers
     • Every engineer needs to know how to run their own:
       • Load Balancers
       • Databases
       • Applications
       • HA systems
  8. @arupchak What to consider
     • Business requirements first, technical requirements second
     • Company buy-in
     • Engineering staff capabilities
     • What do your customers care about?