

Load-testing 101 & Anatomy Of A Load-Generator in Node.js

Slides for a talk given at London Node.js User Group on 23/09/2015

NOTE: Minigun has since been renamed to Artillery :) - https://artillery.io

hassy veldstra

September 24, 2015


Transcript

  1. “Load-testing 101, Anatomy of a load-generator in Node.js, and The joys of load-testing, or When you have a great hammer...” A Talk In 3 Acts. Hassy Veldstra <[email protected]> @hveldstra. LNUG, September 2015
  2. Act 1: Load-Testing 101. “Load-testing is the process of putting demand on a software system and measuring its response.” - Wikipedia
  3. ab

  4-7. Load-testing activities
     1. Exploratory benchmarking
        1. How much overhead will adding New Relic APM add?
        2. How much overhead will this piece of embedded Lua code add to our nginx?
        3. Shall I go with Heroku for my WS-heavy app?
        4. External APIs
  8. An aside! “I'm not a real programmer. I throw together things until it works, then I move on. The real programmers will say ‘Yeah, it works, but you're leaking memory everywhere. Perhaps we should fix that.’ I'll just restart Apache every 10 requests.” - Rasmus Lerdorf (creator of PHP)
  9-12. Load-testing activities (continued)
     2. Soak testing
     3. Load amplification
        1. Add extra load, in production, at peak
        2. JustEat do this every day
        3. Trello too
  13-18. Why Minigun?
     1. Node.js!
     2. just `npm install`
     3. HTTP & WS out of the box
     4. Request chaining, variable payloads = complex flows
     5. JSON reports
     6. Graphical reports
     7. Plugin support (e.g. StatsD output)
     8. CLI or library
     9. Decent performance
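For concreteness, here is roughly what a test script with request chaining and a variable payload looks like, written as a Node.js object. The field names below follow the present-day Artillery schema (Minigun's successor); the exact 2015 Minigun field names may have differed, so treat this as an illustrative sketch, not the documented format.

```javascript
// Sketch of a load-test script: target + load phases, then scenarios.
// NOTE: schema follows modern Artillery; Minigun's may differ slightly.
const script = {
  config: {
    target: 'http://localhost:3000', // hypothetical app under test
    phases: [
      { duration: 60, arrivalRate: 20 } // 20 new virtual users/sec for 60s
    ]
  },
  scenarios: [
    {
      flow: [
        // Request chaining: capture a value from one response...
        {
          post: {
            url: '/login',
            json: { user: 'alice', pass: 'secret' },
            capture: { json: '$.token', as: 'token' }
          }
        },
        // ...and use it as a variable payload in the next request.
        {
          get: {
            url: '/account',
            headers: { Authorization: 'Bearer {{ token }}' }
          }
        }
      ]
    }
  ]
};
```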
  19-20. <3
     • commander for CLI
     • timers & event emitters
     • async <3 <3 <3
     • request <3 and ws <3
     • measured for stats
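Minigun leans on `measured` for its stats; the stdlib sketch below hand-rolls an equivalent summary to show where the min/max/median/p95/p99 fields in the JSON reports (slides 38, 40 and 42) come from. All names here are illustrative, not Minigun's actual internals.

```javascript
// Hand-rolled latency summary, stdlib only. Records per-request latencies
// in milliseconds and summarises them into report-style fields.
class LatencyStats {
  constructor() {
    this.samples = [];
  }
  record(ms) {
    this.samples.push(ms);
  }
  // Nearest-rank percentile over the sorted samples.
  percentile(p) {
    const sorted = [...this.samples].sort((a, b) => a - b);
    const idx = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, Math.min(sorted.length - 1, idx))];
  }
  report() {
    return {
      min: Math.min(...this.samples),
      max: Math.max(...this.samples),
      median: this.percentile(50),
      p95: this.percentile(95),
      p99: this.percentile(99)
    };
  }
}

const stats = new LatencyStats();
[2.3, 8.62, 9.1, 30.28, 53.83, 138.28].forEach((ms) => stats.record(ms));
```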
  21-29. Sending lots of requests
     1. Bottom up
        1. request spec -> function
           1. async.waterfall
     2. What creates the waterfall?
        1. compileScenario() -> function
           1. async.waterfall with a context
        2. One up to Runner
           1. Each phaseSpec -> function
              1. interval running scenario functions
           2. phaseSpec functions are run with async.series
           3. Wait to finish once all phases launch
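A minimal sketch of the bottom-up compilation described above. The real code uses `async.waterfall` and `request`; here the waterfall is hand-rolled and the HTTP call is faked so the sketch stays self-contained, and the names (`compileRequest`, `compileScenario`, the `context` shape) are illustrative assumptions.

```javascript
// request spec -> function: each compiled task receives a context,
// does its "HTTP" work, and passes the context on to the next step.
function compileRequest(spec) {
  return function task(context, next) {
    // In Minigun this is where `request` fires the real HTTP call and
    // captured variables get merged into the context.
    fakeHttp(spec, (err, response) => {
      if (err) return next(err);
      context.responses.push(response);
      next(null, context);
    });
  };
}

// compileScenario() -> function: chain the compiled tasks waterfall-style,
// threading one context through every step (async.waterfall originally).
function compileScenario(specs) {
  const tasks = specs.map(compileRequest);
  return function scenario(done) {
    let i = 0;
    (function step(err, ctx) {
      if (err || i === tasks.length) return done(err, ctx);
      tasks[i++](ctx, step);
    })(null, { vars: {}, responses: [] });
  };
}

// Stand-in for the real HTTP call; calls back synchronously so the
// sketch runs without a server.
function fakeHttp(spec, cb) {
  cb(null, { url: spec.url, statusCode: 200 });
}
```

One level up, per the slide, the Runner compiles each phaseSpec into a function that runs these scenario functions on an interval, executes the phase functions with `async.series`, and waits for everything to finish once all phases have launched.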
  30-31. Two more things
     1. measured
     2. plugins
        1. require if `config.settings`
        2. call exported function with config
        3. and give it an EE
  32-35. Performance
     1. Unoptimized
        1. 500 rps on an entry-level DigitalOcean droplet
        2. `http` is ~2.2k
        3. ab is ~5.9k
     2. Goal: do as little as possible
        1. Flamegraphs!
        2. perf
  36-37. Act III – The Joys Of Load-Testing
     1. Heroku WS
        1. ~6k open connections
        2. 160 conn/s
        3. DigitalOcean is ~7.2k & 350 new conn/sec
  38. Act III – The Joys Of Load-Testing
     "latency": { "min": 2.3, "max": 138.28, "median": 8.62, "p95": 30.28, "p99": 53.83 }
  39. Act III – The Joys Of Load-Testing
     1. New Relic agent
        1. No agent
        2. With an agent
  40. Act III – The Joys Of Load-Testing
     "latency": { "min": 3.08, "max": 298.44, "median": 23.96, "p95": 122.28, "p99": 264.9 }
  41. Act III – The Joys Of Load-Testing
     1. New Relic agent
        1. No agent
        2. With an agent
           1. at the top of the requires
  42. Act III – The Joys Of Load-Testing
     "latency": { "min": 9.04, "max": 8659.03, "median": 1447.43, "p95": 8227.31, "p99": 8416.88 },
     "errors": { "ESOCKETTIMEDOUT": 190, "ETIMEDOUT": 805 }