When measuring web performance, we often try to get a single number that we can trend over time. This may be the median page load time, hero image time, page speed score, or Core Web Vitals score. But is it really that simple?
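The talk itself doesn't prescribe any collection code, but as a rough illustration of what such a "single number" might look like in practice, here is a minimal sketch (assuming a browser RUM context and the standard PerformanceObserver API) that captures Largest Contentful Paint and reduces a batch of samples to a median for trending:

```typescript
// Capture Largest Contentful Paint in the browser.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  // In a real RUM setup this value would be beaconed to an analytics endpoint.
  console.log('LCP (ms):', lastEntry.startTime);
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });

// Analytics-side aggregation: the "single number" we end up trending.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```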
Users seldom visit just a single page on a site, so how do we account for varying performance across multiple pages? How do we tell which page’s performance impacts the overall user experience? How do various cognitive biases affect the user’s perception of our site’s performance?
As developers and data analysts, we have our own biases that affect how we look at the data and which problems we end up trying to solve. Our measurements themselves are often shaped by confirmation bias.
In this talk, we examine the biases that affect both user perception and our ability to measure that perception, and look at ways to identify whether our data exhibits these patterns.
Presented at: iJS Munich