How many times have you implemented a clever performance improvement, and perhaps shipped it to production because it seemed the right thing™ to do, without ever measuring the actual consequences of your change? And even if you are measuring, are you using the right tools and interpreting the results correctly? In this deep-dive session we will use examples taken from real-world situations to demonstrate how to develop meaningful benchmarks, how to avoid the most common (and often subtle) pitfalls, and how to correctly interpret the results and act on them. In particular, we will illustrate how to use JMH for these purposes, explaining why it is the only reliable tool for benchmarking Java applications and showing what can go horribly wrong if you decide to measure the performance of a Java program without it. By the end of this session you will be able to create your own JMH-based benchmarks and, more importantly, to use their results effectively to improve the overall performance of your software.
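
As a small taste of what the session covers, below is a minimal sketch of what a JMH benchmark looks like. The class name, the fields, and the measured operation (a simple string concatenation) are hypothetical placeholders, not examples taken from the talk; the annotations are standard JMH.

    package example; // hypothetical package name

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)      // let the JIT compile the hot path before measuring
    @Measurement(iterations = 5) // iterations actually measured
    @Fork(1)                     // run in a fresh forked JVM to avoid profile pollution
    public class StringConcatBenchmark {

        private String a = "hello";
        private String b = "world";

        @Benchmark
        public String concat() {
            // Returning the result lets JMH consume it,
            // preventing the JIT from eliminating the work as dead code.
            return a + b;
        }
    }

Note how even this tiny sketch already encodes answers to several of the pitfalls discussed in the session: warm-up before measurement, forking to isolate JIT profiles, and returning the computed value so the benchmarked code cannot be optimized away.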