
CNCF Webinar Continuous Profiling Go Application Running in Kubernetes

Microservices and Kubernetes help our architecture to scale and to stay independent, at the price of running many more applications. Golang provides a powerful profiling tool called pprof; it is useful for collecting information from a running binary for later investigation. The problem is that you are not always there to take a profile when needed, and sometimes you do not even know when you will need one: that's where a continuous profiling strategy helps. Profefe is an open-source project that collects and organizes profiles. Gianluca wrote a project called kube-profefe to integrate Kubernetes with Profefe. Kube-profefe contains a kubectl plugin to capture profiles from running pods in Kubernetes, either locally or on Profefe. It also provides an operator to discover and continuously profile applications running inside Pods.

Gianluca Arbezzano

March 27, 2020

Transcript

  1. @gianarb / gianarb.it
     $ go tool pprof http://localhost:14271/debug/pprof/allocs?debug=1
     Fetching profile over HTTP from http://localhost:14271/debug/pprof/allocs?debug=1
     Saved profile in /home/gianarb/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
     Type: inuse_space
     Entering interactive mode (type "help" for commands, "o" for options)
     (pprof) text
     Showing nodes accounting for 1056.92kB, 100% of 1056.92kB total
     Showing top 10 nodes out of 21
           flat  flat%   sum%        cum   cum%
      544.67kB 51.53% 51.53%  544.67kB 51.53%  github.com/jaegertracing/jaeger/vendor/google.golang.org/grpc/internal/transport.newBufWriter
      512.25kB 48.47%   100%  512.25kB 48.47%  time.startTimer
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/cmd/agent/app/processors.(*ThriftProcessor).processBuffer
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/cmd/agent/app/processors.NewThriftProcessor.func2
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/cmd/agent/app/reporter.(*MetricsReporter).EmitBatch
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/cmd/agent/app/reporter/grpc.(*Reporter).EmitBatch
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/cmd/agent/app/reporter/grpc.(*Reporter).send
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/proto-gen/api_v2.(*collectorServiceClient).PostSpans
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/thrift-gen/jaeger.(*AgentProcessor).Process
             0     0%   100%  512.25kB 48.47%  github.com/jaegertracing/jaeger/thrift-gen/jaeger.(*agentProcessorEmitBatch).Process
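
     The /debug/pprof endpoints queried above come from Go's net/http/pprof package. As a rough sketch (the port mirrors the slide, but the setup below is illustrative, not the Jaeger agent's actual code), exposing them takes a blank import plus an HTTP listener:

        package main

        import (
            "log"
            "net/http"
            _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
        )

        func main() {
            // Serve the pprof endpoints on a dedicated admin port; 14271 matches
            // the slide above, but any free port works.
            log.Fatal(http.ListenAndServe("localhost:14271", nil))
        }

     Once something like this is running, go tool pprof http://localhost:14271/debug/pprof/allocs works exactly as shown above.
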
  2. @gianarb / gianarb.it Gianluca Arbezzano, Software Engineer sold to reliability
     @InfluxData • https://gianarb.it • @gianarb
     What I like:
     • I make dirty hacks that look awesome
     • I grow my vegetables
     • Travel for fun and work
  3. @gianarb / gianarb.it The common way is by bothering whoever knows the IPs and how to connect to prod best.
  4. @gianarb / gianarb.it But it is not the SWEs' fault: they do not have a good way to retrieve what they need to be effective at their work.
  5. @gianarb / gianarb.it You never know when you will need a profile, for what, or from where.
  6. @gianarb / gianarb.it Let's summarize the issues:
     • Developers are usually the profile stakeholders
     • Production is not always a comfortable place to interact with
     • You do not know when you will need a profile; it may be one from 2 weeks ago
     • Cloud and Kubernetes increase the amount of noise: many more binaries, and they go up and down continuously. Containers that OOM get restarted transparently, and there is a lot of postmortem analysis going on
  7. @gianarb / gianarb.it Are you ready to see a possible solution? Spoiler alert: it is part of the title.
  8. @gianarb / gianarb.it The pull-based solution was easier to implement for us:
     • Too many applications to re-instrument with the SDK
     • Our services already expose the pprof HTTP handler by default
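
     Pull-based in practice means something periodically fetches profiles from the pprof HTTP handlers the services already expose and stores them. Profefe and kube-profefe do this properly; the sketch below only illustrates the idea, with hypothetical target addresses and local files standing in for a real profile store:

        package main

        import (
            "fmt"
            "io"
            "log"
            "net/http"
            "os"
            "time"
        )

        // Hypothetical pprof endpoints to scrape.
        var targets = []string{
            "http://auth.default.svc:6060",
            "http://billing.default.svc:6060",
        }

        // capture fetches a 30s CPU profile from one target and saves it to disk.
        func capture(base string) error {
            resp, err := http.Get(base + "/debug/pprof/profile?seconds=30")
            if err != nil {
                return err
            }
            defer resp.Body.Close()

            out, err := os.Create(fmt.Sprintf("cpu-%d.pb.gz", time.Now().UnixNano()))
            if err != nil {
                return err
            }
            defer out.Close()
            _, err = io.Copy(out, resp.Body)
            return err
        }

        func main() {
            // Scrape every target every 10 minutes, forever.
            for range time.Tick(10 * time.Minute) {
                for _, t := range targets {
                    if err := capture(t); err != nil {
                        log.Printf("capture %s: %v", t, err)
                    }
                }
            }
        }
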
  9. @gianarb / gianarb.it (Slide: diagram of many APP instances running across the cluster)
  10. @gianarb / gianarb.it Let's make a cronjob that uses the Kubernetes API: github.com/profefe/kube-profefe
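
     In Kubernetes the list of targets is not static, so the cronjob has to discover pods first. kube-profefe does this against the Kubernetes API; roughly, the discovery loop looks like the client-go sketch below (the annotation name and port handling are assumptions for illustration, not kube-profefe's actual contract):

        package main

        import (
            "context"
            "fmt"
            "log"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/rest"
        )

        func main() {
            // Running inside the cluster (e.g. as a CronJob), use the in-cluster
            // service account to talk to the Kubernetes API.
            cfg, err := rest.InClusterConfig()
            if err != nil {
                log.Fatal(err)
            }
            client, err := kubernetes.NewForConfig(cfg)
            if err != nil {
                log.Fatal(err)
            }

            // List pods in every namespace.
            pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                log.Fatal(err)
            }

            for _, pod := range pods.Items {
                // Hypothetical annotation marking pods that expose a pprof port.
                port, ok := pod.Annotations["pprof.example.com/port"]
                if !ok || pod.Status.PodIP == "" {
                    continue
                }
                // Each of these URLs would then be captured and pushed to profefe.
                fmt.Printf("would profile http://%s:%s/debug/pprof/profile\n", pod.Status.PodIP, port)
            }
        }
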
  11. @gianarb / gianarb.it Cool things: Merge profiles
     $ go tool pprof 'http://repo.pprof.cluster.local:10100/api/0/profiles/merge?service=auth&type=cpu&from=2019-05-30T11:49:00&to=2019-05-30T12:49:00&labels=version=1.0.0'
     Fetching profile over HTTP from http://localhost:10100/api/0/profiles...
     Type: cpu
     Entering interactive mode (type "help" for commands, "o" for options)
     (pprof) top
     Showing nodes accounting for 43080ms, 99.15% of 43450ms total
     Dropped 53 nodes (cum <= 217.25ms)
     Showing top 10 nodes out of 12
           flat  flat%   sum%        cum   cum%
       42220ms 97.17% 97.17%    42220ms 97.17%  main.load
         860ms  1.98% 99.15%      860ms  1.98%  runtime.nanotime
             0     0% 99.15%    21050ms 48.45%  main.bar
             0     0% 99.15%    21170ms 48.72%  main.baz
  12. @gianarb / gianarb.it Analyze pprof profiles
     • Easy correlation with other metrics such as mem/cpu usage
     • All those profiles contain useful information
     • Cross-service utilization for performance optimization
       ◦ Give me the top 10 CPU-intensive functions in the whole system
     • Building bridges between dev and ops