MOM! My algorithms SUCK
Abe Stanway
September 19, 2013
Given at Monitorama.eu 2013 in Berlin.
http://vimeo.com/75183236
Transcript
@abestanway MOM! my algorithms SUCK
i know how to fix monitoring once and for all.
a real human physically staring at a single metric 24/7
that human will then alert a sleeping engineer when her
metric does something weird
Boom. Perfect Monitoring™.
this works because humans are excellent visual pattern matchers*
*there are, of course, many advanced statistical applications where signal cannot be determined from noise just by looking at the data.
can we teach software to be as good at simple
anomaly detection as humans are?
let’s explore.
anomalies = not “normal”
humans can tell what “normal” is by just looking at
a timeseries.
the human definition: “if a datapoint is not within reasonable bounds, more or less, of what usually happens, it’s an anomaly”
there are real statistics that describe what we mentally approximate
“what usually happens” → the mean
“more or less” → the standard deviation
“reasonable bounds” → 3σ
so, in math speak, a metric is anomalous if the latest datapoint is more than three standard deviations away from the mean
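a minimal sketch of that rule in Python (the function name and threshold parameter are illustrative, not from the deck):

```python
import numpy as np

def is_anomalous(series, threshold=3.0):
    """Flag the latest datapoint if it sits more than `threshold`
    standard deviations away from the mean of the series."""
    s = np.asarray(series, dtype=float)
    sigma = s.std()
    if sigma == 0:
        return False  # flat series: nothing can deviate
    return abs(s[-1] - s.mean()) > threshold * sigma
```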
we have essentially derived statistical process control.
pioneered in the 1920s. heavily used in industrial engineering for
quality control on assembly lines.
traditional control charts: specification limits
grounded in exchangeability: past = future
needs to be stationary
produced by independent random variables, with well-defined expected values
this allows for statistical inference
in other words, you need good lookin’ timeseries for this to work.
normal distribution: a more concise definition of good lookin’
[bell curve around μ, banded at each σ: 34.1% / 13.6% / 2.1% per side]
if you’ve got a normal distribution, chances are you’ve got
an exchangeable, stationary series produced by independent random variables
99.7% of values fall within 3σ
[bell curve with the tails beyond 3σ highlighted]
if your datapoint is in here, beyond 3σ, it’s an anomaly.
when only 0.3% lie beyond 3σ...
...you get a high signal to noise ratio...
...where “signal” indicates a fundamental state change, as opposed to a random, improbable variation.
a fundamental state change in the process means a different
probability distribution function that describes the process
anomaly detection: determining when probability distribution function shifts have occurred, as early as possible.
[two bell curves: the mean shifts from μ to μ₁, a new PDF that describes a new process]
drilling holes, sawing boards, forging steel
snapped drill bit, teeth missing on table saw, steel, like, melted
processes with well-planned expected values that only suffer small, random deviances when working properly...
...and massive “deviances”, aka probability function shifts, when working improperly.
the bad news:
server infrastructures aren’t like assembly lines
systems are active participants in their own design
processes don’t have well-defined expected values
they aren’t produced by genuinely independent random variables.
large variance does not necessarily indicate poor quality
they have seasonality
skewed distributions! less than 99.73% of all values lie within 3σ, so breaching 3σ is not necessarily bad
[skewed distribution: the region beyond 3σ may still be a possibly normal range]
the dirty secret: using SPC-based algorithms results in lots and
lots of false positives, and probably lots of false negatives as well
no way to retroactively find the false negatives short of
combing with human eyes!
how do we combat this?*
*warning! ideas!
we could always use custom fit models...
...after all, as long as the *errors* from the model
are normally distributed, we can use 3σ
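a sketch of that trick, with a linear trend standing in for an artisanal handcrafted model: fit the model, then run the 3σ test on the residuals rather than the raw series:

```python
import numpy as np

def residual_anomaly(series, threshold=3.0):
    """Fit a simple model (a linear trend here, as a stand-in for
    any custom model), then apply the 3-sigma test to the model's
    residuals instead of the raw values."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    slope, intercept = np.polyfit(t, s, 1)   # the "model"
    residuals = s - (slope * t + intercept)
    sigma = residuals.std()
    if sigma == 0:
        return False
    return abs(residuals[-1]) > threshold * sigma
```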
Parameters are cool!
[a pretty decent forecast based on an artisanal handcrafted model]
but fitting models is hard, even by hand.
possible to implement a class of ML algorithms that determine
models based on distribution of errors, using Q-Q plots
Q-Q plots can also be used to determine if the
PDF has changed, although hard to do with limited sample size
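a minimal sketch of the Q-Q check, assuming scipy is available: scipy.stats.probplot compares sample quantiles against a normal distribution’s, and a correlation r near 1 suggests the errors are roughly normal:

```python
import numpy as np
from scipy import stats

# Stand-in for a model's residuals; in practice these would come
# from whatever model was fit to the series.
residuals = np.random.normal(0, 1, 500)

# probplot returns the ordered quantile pairs plus a least-squares
# fit through them; r close to 1.0 means a near-straight Q-Q line,
# i.e. the errors look plausibly normal.
(osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
print(f"Q-Q fit r = {r:.3f}")
```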
consensus: throw lots of different models at a series, hope it all shakes out.
[yes] [yes] [no] [no] [yes] [yes] = anomaly!
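a sketch of that voting scheme with two placeholder detectors (the detector names and the `required` count are made up for illustration):

```python
import numpy as np

# Two hypothetical detectors: one votes on deviation from the mean,
# one on deviation from the median (via median absolute deviation).
def stddev_from_mean(series, t=3.0):
    s = np.asarray(series, dtype=float)
    return s.std() > 0 and abs(s[-1] - s.mean()) > t * s.std()

def deviation_from_median(series, t=3.0):
    s = np.asarray(series, dtype=float)
    mad = np.median(np.abs(s - np.median(s)))
    return mad > 0 and abs(s[-1] - np.median(s)) > t * mad

DETECTORS = [stddev_from_mean, deviation_from_median]

def consensus(series, detectors=DETECTORS, required=2):
    """Anomalous only if at least `required` detectors vote yes."""
    votes = sum(1 for detect in detectors if detect(series))
    return votes >= required
```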
of course, if your models are all SPC-based, this doesn’t
really get you anywhere
use exponentially weighted moving averages to adapt faster
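a minimal EWMA sketch, hand-rolled for clarity (pandas’ ewm does the same job): because old data decays geometrically, the baseline tracks level shifts faster than a plain mean:

```python
import numpy as np

def ewma(series, alpha=0.2):
    """Exponentially weighted moving average: each new point gets
    weight alpha and history decays by (1 - alpha), so the baseline
    adapts to recent behavior faster than a plain mean does."""
    avg = float(series[0])
    out = []
    for x in series:
        avg = alpha * float(x) + (1 - alpha) * avg
        out.append(avg)
    return np.array(out)
```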
fourier transforms to detect seasonality
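one rough way to do that, as an illustration rather than a production detector: after removing the mean, the strongest remaining frequency in the FFT hints at a dominant seasonal period:

```python
import numpy as np

def dominant_period(series):
    """Return the period (in samples) of the strongest non-DC
    frequency component -- a rough hint at seasonality."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s))   # cycles per sample
    k = spectrum[1:].argmax() + 1     # skip the zero frequency
    return 1.0 / freqs[k]
```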
second order anomalies: is the series “anomalously anomalous”?
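one possible reading of “anomalously anomalous”, sketched: track the rate of first-order anomaly votes over time, then apply the same 3σ idea to that rate itself (the window and threshold here are arbitrary):

```python
import numpy as np

def anomalously_anomalous(anomaly_flags, window=60, threshold=3.0):
    """Second-order check: compute a rolling anomaly *rate* from a
    history of 0/1 anomaly votes, then ask whether the latest rate
    is itself more than `threshold` sigmas from its own mean."""
    flags = np.asarray(anomaly_flags, dtype=float)
    rates = np.convolve(flags, np.ones(window) / window, mode="valid")
    if rates.std() == 0:
        return False
    return abs(rates[-1] - rates.mean()) > threshold * rates.std()
```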
...this is all very hard.
so, we can either change what we expect of monitoring...
...and treat it as a way of building noisy situational
awareness, not absolute directives (alerts)...
...or we can change what we expect out of engineering...
...and construct strict specifications and expected values of all metrics.
neither are going to happen.
so we have to crack this algorithm nut.
...ugh. @abestanway