Get Instrumented: How Prometheus Can Unify Your Metrics
Hynek Schlawack
May 31, 2016
Transcript
Hynek Schlawack Get Instrumented How Prometheus Can Unify Your Metrics
Goals
Service Level
❖ Indicator
❖ Objective
❖ (Agreement)
Metrics
              12:00  12:01  12:02  12:03  12:04
avg latency     0.3    0.5    0.8    1.1    2.6
server load     0.3    1.0    2.3    3.5    5.2
Instrument
Metric Types
❖ counter
❖ gauge
❖ summary
❖ histogram
❖ buckets (1s, 0.5s, 0.25s, …)
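As an illustration (not from the talk), here is how these four metric types look in the official Python client, prometheus_client, which appears later in the deck; the metric names are made up for this sketch:

from prometheus_client import Counter, Gauge, Summary, Histogram

# Hypothetical example metrics -- names chosen for illustration only.
REQS = Counter("app_requests_total", "Total requests handled.")
IN_FLIGHT = Gauge("app_in_flight_requests", "Requests currently in progress.")
PAYLOAD = Summary("app_payload_bytes", "Size of request payloads.")
LATENCY = Histogram(
    "app_request_seconds",
    "Request latency in seconds.",
    buckets=(0.25, 0.5, 1.0),  # like the 1s/0.5s/0.25s buckets above
)

REQS.inc()            # counters only go up
IN_FLIGHT.set(3)      # gauges can go up and down
PAYLOAD.observe(512)  # summaries track count and sum
LATENCY.observe(0.3)  # histograms count observations per bucket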
Averages
❖ avg(request time) ≠ avg(UX)
❖ avg({1, 1, 1, 1, 10}) = 2.8
❖ median({1, 1, 1, 1, 10}) = 1
❖ median({1, 1, 100_000}) = 1
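A quick check of those numbers (not part of the talk, just the standard library's statistics module):

from statistics import mean, median

print(mean([1, 1, 1, 1, 10]))    # 2.8
print(median([1, 1, 1, 1, 10]))  # 1
print(median([1, 1, 100_000]))   # 1 -- one huge outlier does not move the median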
Percentiles
❖ the nth percentile P of a data set is a value ≥ n% of the values
❖ 50th percentile = 1 ms → 50% of requests done by 1 ms

P      {1, 1, 100_000}
50th   1
95th   90_000
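To reproduce the two numbers in that table (an illustration, not from the slides; numpy's default linear interpolation lands close to the 90_000 shown above):

import numpy as np

data = [1, 1, 100_000]
print(np.percentile(data, 50))  # 1.0
print(np.percentile(data, 95))  # ~90000, interpolated between 1 and 100_000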
Naming
backend1_app_http_reqs_msgs_post
backend1_app_http_reqs_msgs_get
…

app_http_reqs_total{meth="POST", path="/msgs", backend="1"}
app_http_reqs_total{meth="GET", path="/msgs", backend="1"}
…
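In the Python client, those label dimensions could be declared and used roughly like this (a sketch using the slide's metric and label names; the talk's own application code comes later):

from prometheus_client import Counter

HTTP_REQS = Counter(
    "app_http_reqs_total",
    "HTTP requests.",
    ["meth", "path", "backend"],  # labels instead of per-endpoint metric names
)

HTTP_REQS.labels(meth="POST", path="/msgs", backend="1").inc()
HTTP_REQS.labels(meth="GET", path="/msgs", backend="1").inc()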
1. resolution = scraping interval
2. missing scrapes = less resolution
Pull: Problems
❖ short-lived jobs
❖ target discovery
Configuration

scrape_configs:
  - job_name: 'prometheus'
    target_groups:
      - targets:
        - 'localhost:9090'

resulting target labels:
{instance="localhost:9090",job="prometheus"}
Pull: Problems
❖ target discovery
❖ short-lived jobs
❖ Heroku/NATed systems
Pull: Advantages
❖ multiple Prometheis easy
❖ outage detection
❖ predictable, no self-DoS
❖ easy to instrument 3rd parties
Metrics Format

# HELP req_seconds Time spent processing a request in seconds.
# TYPE req_seconds histogram
req_seconds_count 390.0
req_seconds_sum 177.0319407
Percentiles

req_seconds_bucket{le="0.05"} 0.0
req_seconds_bucket{le="0.25"} 1.0
req_seconds_bucket{le="0.5"} 273.0
req_seconds_bucket{le="0.75"} 369.0
req_seconds_bucket{le="1.0"} 388.0
req_seconds_bucket{le="2.0"} 390.0
req_seconds_bucket{le="+Inf"} 390.0
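How a quantile estimate falls out of such cumulative buckets (a simplified sketch of the idea behind histogram_quantile, shown next; it assumes linear interpolation inside the matching bucket and ignores the +Inf edge case):

def quantile_from_buckets(q, buckets):
    """Estimate the q-quantile from cumulative (upper_bound, count) buckets."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Interpolate linearly inside the bucket that contains the rank.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count

buckets = [(0.05, 0), (0.25, 1), (0.5, 273), (0.75, 369), (1.0, 388), (2.0, 390)]
print(quantile_from_buckets(0.9, buckets))  # ~0.70 seconds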
Aggregation

sum(rate(req_seconds_count[1m]))
sum(rate(req_seconds_count{dc="west"}[1m]))
sum(rate(req_seconds_count[1m])) by (dc)
Percentiles

histogram_quantile(
  0.9,
  rate(req_seconds_bucket[10m]))
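To run such a query programmatically (an illustration, not from the talk; it assumes a Prometheus server on localhost:9090 and the requests package):

import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "histogram_quantile(0.9, rate(req_seconds_bucket[10m]))"},
)
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])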
Internal
❖ great for ad-hoc
❖ 1 expr per graph
❖ templating
PromDash
❖ best integration
❖ former official
❖ now deprecated
❖ don’t bother
Grafana
❖ pretty & powerful
❖ many integrations
❖ mix and match!
❖ use this!
Alerts & Scrying

ALERT DiskWillFillIn4Hours
  IF predict_linear(node_filesystem_free[1h], 4 * 3600) < 0
  FOR 5m
Environment
Apache nginx Django PostgreSQL MySQL MongoDB CouchDB redis Varnish etcd Kubernetes Consul collectd HAProxy statsd graphite InfluxDB SNMP
node_exporter, cAdvisor
System Insight
❖ load
❖ procs
❖ memory
❖ network
❖ disk
❖ I/O
mtail
❖ follow (log) files
❖ extract metrics using regex
❖ can be better than direct instrumentation
Moar
❖ Edges: web servers/HAProxy
❖ black box
❖ databases
❖ network
So Far
❖ system stats
❖ outside look
❖ 3rd party components
Code
cat-or.not
❖ HTTP service
❖ upload picture
❖ answers meow! or nope!
from flask import Flask, g, request

from cat_or_not import is_cat

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    g.auth.check(request)
    return ("meow!"
            if is_cat(request.files["pic"])
            else "nope!")

if __name__ == "__main__":
    app.run()
pip install prometheus_client
from prometheus_client import \
    start_http_server

# …

if __name__ == "__main__":
    start_http_server(8000)
    app.run()
process_virtual_memory_bytes 156393472.0
process_resident_memory_bytes 20480000.0
process_start_time_seconds 1460214325.21
process_cpu_seconds_total 0.169999999998
process_open_fds 8.0
process_max_fds 1024.0
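Those process_* lines are the default export of the Python client; with the start_http_server(8000) call from the previous slide you could peek at them like this (illustration only, assuming the app runs locally and the requests package is installed):

import requests

body = requests.get("http://localhost:8000/metrics").text
for line in body.splitlines():
    if line.startswith("process_"):
        print(line)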
from prometheus_client import \
    Histogram, Gauge

REQUEST_TIME = Histogram(
    "cat_or_not_request_seconds",
    "Time spent in HTTP requests.")

ANALYZE_TIME = Histogram(
    "cat_or_not_analyze_seconds",
    "Time spent analyzing pictures.")

IN_PROGRESS = Gauge(
    "cat_or_not_in_progress_requests",
    "Number of requests in progress.")
# @app.route must be the outermost decorator so Flask registers
# the instrumented function, not the bare one.
@app.route("/analyze", methods=["POST"])
@IN_PROGRESS.track_inprogress()
@REQUEST_TIME.time()
def analyze():
    g.auth.check(request)
    with ANALYZE_TIME.time():
        result = is_cat(
            request.files["pic"].stream)
    return "meow!" if result else "nope!"
from prometheus_client import Counter, Histogram

AUTH_TIME = Histogram("auth_seconds",
                      "Time spent authenticating.")
AUTH_ERRS = Counter("auth_errors_total",
                    "Errors while authing.")
AUTH_WRONG_CREDS = Counter("auth_wrong_creds_total",
                           "Wrong credentials.")

class Auth:
    # ...
    @AUTH_TIME.time()
    def auth(self, request):
        while True:
            try:
                return self._auth(request)
            except WrongCredsError:
                AUTH_WRONG_CREDS.inc()
                raise
            except Exception:
                AUTH_ERRS.inc()  # count unexpected errors and retry
@app.route("/analyze", methods=["POST"]) def analyze(): g.auth.check(request) with ANALYZE_TIME.time(): result = is_cat(
request.files["pic"].stream) return "meow!" if result else "nope!"
pip install prometheus_async
Wrapper

from prometheus_async.aio import time

@time(REQUEST_TIME)
async def view(request):
    # ...
Goodies
❖ aiohttp-based metrics export
❖ also in thread!
❖ Consul Agent integration
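A sketch of what that can look like, based on my reading of the prometheus_async docs (the exact function and parameter names are assumptions and may have changed; check the current documentation): serve the metrics via aiohttp from a background thread and register the endpoint with a local Consul Agent.

# Sketch only -- verify against the prometheus_async documentation.
from prometheus_async.aio import web
from prometheus_async.aio.sd import ConsulAgent

# Export /metrics via aiohttp in a background thread and register
# the endpoint with the local Consul Agent; "cat_or_not" is just
# the example service name from this talk.
web.start_http_server_in_thread(
    port=8000,
    service_discovery=ConsulAgent(name="cat_or_not"),
)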
Wrap Up ✓ ✓ ✓
ox.cx/p @hynek vrmd.de