
On the path to full Observability with OSS (and launch of Loki)

KubeCon 2018 presentation on how to instrument an app with Prometheus and Jaeger, how to debug an app, and about Grafana's new log aggregation solution: Loki.

David

December 11, 2018

Transcript

  1. I’m David, all things UX at Grafana Labs. If you click and are stuck, reach out to me. [email protected] / Twitter: @davkals
  2. App
      • Classic 3-tiered app
      • Deployed in Kubernetes
      • It’s running, but how is it doing?
      (Architecture diagram: load balancers → app servers → DB servers)
  3. Add instrumentation
      • Make sure the app logs enough
      • Add the Prometheus client library for metrics
      • Hook up Jaeger for distributed tracing
  4. Structured Logging

      // Log in logfmt key/value pairs so fields can be parsed later
      logger = kitlog.NewLogfmtLogger(kitlog.NewSyncWriter(os.Stderr))

      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          since := time.Now()
          // Log the request duration on the way out
          defer func() {
              logger.Log("level", "info", "msg", "query executed OK", "duration", time.Since(since))
          }()
          ...
          if fail {
              logger.Log("level", "error", "msg", "query lock timeout")
          }
          ...
      })
  5. Add instrumentation
      • Make sure the app logs enough
      • Add the Prometheus client library for metrics
      • Hook up Jaeger for distributed tracing
  6. Metrics with Prometheus

      // Histogram of request durations, labeled by method, route and status code
      requestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
          Name:    "request_duration_seconds",
          Help:    "Time (in seconds) spent serving HTTP requests",
          Buckets: prometheus.DefBuckets,
      }, []string{"method", "route", "status_code"})

      // Middleware that records one observation per request
      func wrap(h http.HandlerFunc) http.HandlerFunc {
          return func(w http.ResponseWriter, r *http.Request) {
              m := httpsnoop.CaptureMetrics(h, w, r)
              requestDuration.WithLabelValues(r.Method, r.URL.Path, strconv.Itoa(m.Code)).Observe(m.Duration.Seconds())
          }
      }

      http.HandleFunc("/", wrap(func(w http.ResponseWriter, r *http.Request) {}))
  7. Add instrumentation
      • Make sure the app logs enough
      • Add the Prometheus client library for metrics
      • Hook up Jaeger for distributed tracing
  8. Jaeger Tracing

      // Configure the tracer from the JAEGER_* environment variables
      cfg, err := jaegercfg.FromEnv()
      cfg.InitGlobalTracer("db")

      http.HandleFunc("/", wrap(func(w http.ResponseWriter, r *http.Request) {}))

      // nethttp.Middleware starts/continues a span for every incoming request
      go func() {
          errc <- http.ListenAndServe(dbPort, nethttp.Middleware(opentracing.GlobalTracer(), http.DefaultServeMux))
      }()
  9. Bonus: Set up tools
      • https://github.com/coreos/prometheus-operator: job to look after running Prometheus on Kubernetes, plus a set of configs for all the exporters you need to get Kubernetes metrics
      • https://github.com/grafana/jsonnet-libs/tree/master/prometheus-ksonnet: our configs for running Prometheus, Alertmanager and Grafana together
      • https://github.com/kubernetes-monitoring/kubernetes-mixin: joint project to unify and improve common alerts for Kubernetes
  10. RED method dashboard of the app
      • You’ve been paged because the p99 latency shot up from <10ms to >700ms
      • The RED method dashboard is an ideal entry point to see the health of the system (a sketch of the underlying queries follows below)
      • Notice also the DB error rates, luckily not bubbling up to the user
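      A RED (Rate, Errors, Duration) dashboard can be driven entirely by the request_duration_seconds histogram instrumented earlier. A minimal sketch of the kind of PromQL behind such panels (label names follow the earlier code; the exact dashboard queries may differ):

        # Request rate per route
        sum by (route) (rate(request_duration_seconds_count[1m]))

        # Error rate (5xx responses) per route
        sum by (route) (rate(request_duration_seconds_count{status_code=~"5.."}[1m]))

        # p99 latency per route
        histogram_quantile(0.99, sum by (le, route) (rate(request_duration_seconds_bucket[1m])))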
  11. Debug latency issue with Jaeger
      • Investigate the latency issue first using Jaeger
      • The app is spending lots of time even though the DB request returned quickly (see the span sketch below)
      • Root cause: the backoff period was too high
      • Idea for fix: lower the backoff period
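      Idle time like this only shows up in Jaeger if the interesting steps are wrapped in spans. A hypothetical sketch (not from the talk) of wrapping a DB call and its retry backoff in a child span with opentracing-go, so the wait appears as a gap inside its own span:

        package main

        import (
            "context"
            "time"

            opentracing "github.com/opentracing/opentracing-go"
        )

        // queryWithRetry is an illustrative helper: it wraps a DB call and its
        // retry backoff in a span so the time spent waiting is visible in Jaeger.
        func queryWithRetry(ctx context.Context, do func(context.Context) error, backoff time.Duration) error {
            span, ctx := opentracing.StartSpanFromContext(ctx, "db.query")
            defer span.Finish()

            var err error
            for attempt := 0; attempt < 3; attempt++ {
                if err = do(ctx); err == nil {
                    return nil
                }
                span.LogKV("event", "retry", "attempt", attempt, "error", err.Error())
                time.Sleep(backoff) // a too-high backoff shows up as idle time in the trace
            }
            return err
        }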
  12. Jump to Explore from a dashboard panel
      • Still need to investigate the DB errors
      • Jump to Explore for query-driven troubleshooting
  13. Explore for query interaction
      • Explore pre-filled the query from the dashboard
      • Interact with the query with smart tab completion
      • Break down by “instance” to check which DB instance is producing errors (see the query sketch below)
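      A sketch of what that breakdown query looks like (metric and label names follow the earlier instrumentation; the job label value is illustrative):

        sum by (instance) (rate(request_duration_seconds_count{job="db", status_code=~"5.."}[1m]))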
  14. Explore for query interaction
      • The breakdown by instance shows a single instance producing 500s (error status code)
      • Click on the instance label to narrow down further
  15. Explore for query interaction
      • The instance label is now part of the query selector
      • We’ve isolated the DB instance and see only its metrics
      • Now we can split the view and select the logging datasource
  16. Metrics and logs side-by-side
      • On the right side, switch over to a logging datasource
      • The logging query retains the Prometheus query labels to select the log stream (example below)
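      The same labels that selected the time series now select a log stream. A sketch of the carried-over selector (label values here are illustrative):

        Metrics query:  rate(request_duration_seconds_count{job="db", instance="db-3:80"}[1m])
        Log stream:     {job="db", instance="db-3:80"}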
  17. Explore for query interaction
      • Filter for log level error using the graph legend
      • Ad-hoc stats on structured log fields
      • Root cause found: “Too many open connections”
      • Idea for fix: more DB replicas, or connection pooling (see the sketch below)
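      A minimal sketch of the connection-pooling idea using Go's database/sql pool settings (driver, DSN and limits are illustrative, not from the talk):

        package main

        import (
            "database/sql"
            "time"

            _ "github.com/lib/pq" // illustrative driver choice
        )

        func openDB() (*sql.DB, error) {
            db, err := sql.Open("postgres", "postgres://user:pass@db:5432/app") // illustrative DSN
            if err != nil {
                return nil, err
            }
            // Cap the pool so the app stops opening unbounded connections.
            db.SetMaxOpenConns(50)
            db.SetMaxIdleConns(10)
            db.SetConnMaxLifetime(5 * time.Minute)
            return db, nil
        }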
  18. More goals
      • Logs should be cheap!
      • We found existing solutions are hard to scale
      • We didn’t need full-text indexing
      • Do ad-hoc analysis in the browser
  19. Like Prometheus, but for logs
      • Prometheus-style service discovery of logging targets (config sketch below)
      • Labels are indexed as metadata, e.g.: {job="app1"}
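      A rough sketch of what Prometheus-style service discovery of logging targets looks like for the Loki agent (Promtail), which reuses the Prometheus scrape-config shape; the values shown here are illustrative:

        scrape_configs:
          - job_name: kubernetes-pods
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              # Turn pod metadata into Loki stream labels, Prometheus-style
              - source_labels: [__meta_kubernetes_namespace]
                target_label: namespace
              - source_labels: [__meta_kubernetes_pod_name]
                target_label: pod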
  20. See Loki logs inside Grafana
      • New built-in Loki datasource
      • Prometheus-style stream selector
      • Regexp filtering done by the backend (example below)
      • Simple UI:
        ◦ no paging
        ◦ return and render 1000 rows by default
        ◦ use the power of Cmd+F
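      A sketch of such a query: a Prometheus-style stream selector, plus a regexp the backend uses to filter lines (the values are illustrative and the exact filter syntax depends on the Loki version):

        Stream selector:  {job="app1", namespace="prod"}
        Regexp filter:    (?i)error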
  21. See Loki logs inside Grafana
      • Various dedup options
      • In-browser line parsing support for JSON and logfmt
      • Ad-hoc stats across returned results (up to 1000 rows by default)
      • Coming soon: ad-hoc graphs based on parsed numbers
  22. Enable Explore UI (BETA)
      The logging UI is behind a feature flag. To enable it, edit the Grafana config.ini file:

        [explore]
        enabled = true

      Explore will be released in Grafana v6.0 (Feb 2019). Loki can be used today.
      Feedback welcome: @davkals or [email protected]
  23. Integrate Tracing
      • Associate traces with logs and metrics
      • Labels and Exemplars FTW
      • Aiming for Q2 2019