MOM! My algorithms SUCK
Abe Stanway
September 19, 2013
Given at Monitorama.eu 2013 in Berlin.
http://vimeo.com/75183236
Transcript
@abestanway MOM! my algorithms SUCK
i know how to fix monitoring once and for all.
a real human physically staring at a single metric 24/7
that human will then alert a sleeping engineer when her
metric does something weird
Boom. Perfect Monitoring™.
this works because humans are excellent visual pattern matchers*
*there are, of course, many advanced statistical applications where signal cannot be determined from noise just by looking at the data.
can we teach software to be as good at simple
anomaly detection as humans are?
let’s explore.
anomalies = not “normal”
humans can tell what “normal” is by just looking at
a timeseries.
the human definition: “if a datapoint is not within reasonable bounds, more or less, of what usually happens, it’s an anomaly”
there are real statistics that describe what we mentally approximate
“what usually happens” the mean
“more or less” the standard deviation
“reasonable bounds” 3σ
so, in math speak, a metric is anomalous if the latest datapoint lies more than three standard deviations from the mean
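a minimal sketch of that rule in Python (the function and variable names are illustrative, not from the talk):

    import statistics

    def is_anomalous(series, threshold=3.0):
        """Flag the latest datapoint if it sits more than `threshold`
        standard deviations away from the mean of the history."""
        history, latest = series[:-1], series[-1]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return abs(latest - mean) > threshold * stdev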
we have essentially derived statistical process control.
pioneered in the 1920s. heavily used in industrial engineering for
quality control on assembly lines.
traditional control charts: specification limits
grounded in exchangeability: past = future
needs to be stationary
produced by independent random variables, with well-defined expected values
this allows for statistical inference
in other words, you need good lookin’ timeseries for this
to work.
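as a rough illustration of “good lookin’”, one way to sanity-check a series is a stationarity test; this sketch assumes statsmodels is installed and is not something the talk prescribes:

    from statsmodels.tsa.stattools import adfuller

    def looks_stationary(series, alpha=0.05):
        """Augmented Dickey-Fuller test: a low p-value suggests the
        series is stationary enough for SPC-style limits."""
        _, p_value, *_ = adfuller(series)
        return p_value < alpha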
normal distribution: a more concise definition of good lookin’
[bell curve figure: each band between μ and μ±σ holds 34.1% of values, μ±σ to μ±2σ holds 13.6%, μ±2σ to μ±3σ holds 2.1%]
if you’ve got a normal distribution, chances are you’ve got
an exchangeable, stationary series produced by independent random variables
99.7% of values fall within 3σ of the mean
[same bell curve, with the tails beyond ±3σ shaded]
if your datapoint lands in those tails, it’s an anomaly.
when only 0.3% of values lie outside 3σ...
...you get a high signal to noise ratio...
...where “signal” indicates a fundamental state change, as opposed to
a random, improbable variation.
a fundamental state change in the process means a different
probability distribution function that describes the process
anomaly detection: determining when probability distribution function shifts have occurred, as early as possible.
[figure: the original distribution centered on μ₁, then a shifted distribution alongside it] a new PDF that describes a new process
drilling holes, sawing boards, forging steel
snapped drill bit, teeth missing on the table saw, steel, like, melted
processes with well planned expected values that only suffer small,
random deviances when working properly...
...and massive “deviances”, aka, probability function shifts, when working improperly.
the bad news:
server infrastructures aren’t like assembly lines
systems are active participants in their own design
processes don’t have well defined expected values
they aren’t produced by genuinely independent random variables.
large variance does not necessarily indicate poor quality
they have seasonality
skewed distributions! less than 99.73% of all values lie within 3σ, so breaching 3σ is not necessarily bad
[skewed distribution figure: the region beyond 3σ is possibly still the normal range]
the dirty secret: using SPC-based algorithms results in lots and
lots of false positives, and probably lots of false negatives as well
no way to retroactively find the false negatives short of
combing with human eyes!
how do we combat this?*
*warning! ideas!
we could always use custom fit models...
...after all, as long as the *errors* from the model
are normally distributed, we can use 3σ
Parameters are cool! a pretty decent forecast based on an
artisanal handcrafted model
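a rough sketch of that idea; the crude linear-trend “model” here is only a stand-in for the artisanal handcrafted one:

    import numpy as np

    def residuals_are_anomalous(series, threshold=3.0):
        """Fit a simple linear trend, then apply the 3-sigma rule to
        the model's errors rather than to the raw values."""
        series = np.asarray(series, dtype=float)
        t = np.arange(len(series))
        slope, intercept = np.polyfit(t, series, 1)   # the "model"
        residuals = series - (slope * t + intercept)  # the errors
        return abs(residuals[-1]) > threshold * residuals[:-1].std()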
but fitting models is hard, even by hand.
possible to implement a class of ML algorithms that determine
models based on distribution of errors, using Q-Q plots
Q-Q plots can also be used to determine if the
PDF has changed, although hard to do with limited sample size
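a sketch of reading a Q-Q plot programmatically, assuming scipy is available; the 0.99 cutoff is arbitrary:

    from scipy import stats

    def errors_look_normal(residuals, min_r_squared=0.99):
        """Compare residuals against theoretical normal quantiles; a
        near-straight Q-Q line (r^2 close to 1) suggests normal errors."""
        (_osm, _osr), (_slope, _intercept, r) = stats.probplot(residuals, dist="norm")
        return r ** 2 >= min_r_squared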
consensus: throw lots of different models at a series, hope it all shakes out.
[yes] [yes] [no] [no] [yes] [yes] = anomaly!
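the voting itself is simple; this sketch treats each detector as a function of the series, with the consensus threshold of 4 chosen to match the illustration above:

    def anomalous_by_consensus(series, detectors, consensus=4):
        """Run every detector over the series and alert only when
        enough of them vote yes."""
        votes = sum(1 for detect in detectors if detect(series))
        return votes >= consensus

any of the sketches above could serve as the voters.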
of course, if your models are all SPC-based, this doesn’t
really get you anywhere
use exponentially weighted moving averages to adapt faster
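a sketch of an EWMA-based check; alpha controls how quickly the average adapts and is chosen here only for illustration:

    def ewma_is_anomalous(series, alpha=0.3, threshold=3.0):
        """Track an exponentially weighted mean and variance over the
        history, then test the latest point against them."""
        avg, var = series[0], 0.0
        for x in series[1:-1]:
            diff = x - avg
            avg += alpha * diff
            var = (1 - alpha) * (var + alpha * diff * diff)
        return abs(series[-1] - avg) > threshold * (var ** 0.5)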
fourier transforms to detect seasonality
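a sketch of spotting seasonality with a Fourier transform, assuming evenly spaced samples; numpy is the only dependency:

    import numpy as np

    def dominant_period(series):
        """Return the period (in samples) of the strongest frequency
        component, as a hint of seasonality."""
        detrended = np.asarray(series, dtype=float) - np.mean(series)
        spectrum = np.abs(np.fft.rfft(detrended))
        freqs = np.fft.rfftfreq(len(detrended))
        peak = spectrum[1:].argmax() + 1   # skip the zero-frequency bin
        return 1.0 / freqs[peak]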
second order anomalies: is the series “anomalously anomalous”?
...this is all very hard.
so, we can either change what we expect of monitoring...
...and treat it as a way of building noisy situational
awareness, not absolute directives (alerts)...
...or we can change what we expect out of engineering...
...and construct strict specifications and expected values of all metrics.
neither is going to happen.
so we have to crack this algorithm nut.
...ugh. @abestanway