MEM* (Chapter 1)
2. MEM guidelines and best practices (Chapter 2)

What we're NOT going to see
1. Build your own MEM strategy (Chapter 3), since it's the most boring part IMHO
2. The Appendix, apart from an image I found useful to explain some notions

*MEM stands for Media Effectiveness Measurement

Some premises
1. I always learn a lot by preparing a slide deck: thanks, MeasureCamp!
2. I'm just a media measurement marketer wannabe :)
3. The playbook is publicly available here

This is for you if you:
1. Haven't had time to read the paper
2. Think the concepts in the playbook are crucial for Digital Analytics
3. Don't think Digital Analytics is only data collection

Introduction
these tools. But they can work together. For instance, for digital click-based channels:
1. Attribution can be used as the upper bound
2. Incrementality experiments can be used as the lower bound
3. MMM should fall between the two

Fundamentals // Differences between MEM tools
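One way to make this "bounds" idea concrete is a simple sanity check across the three tools. The sketch below is not from the playbook; the channel names and ROAS figures are made up, and it only flags channels whose MMM ROAS falls outside the experiment-to-attribution range.

```python
# Minimal sketch (hypothetical numbers): for click-based channels, attributed
# ROAS acts as the upper bound, experiment iROAS as the lower bound, and the
# MMM estimate is expected to fall between the two.
channels = {
    # channel: (iROAS from experiment, MMM ROAS, attributed ROAS)
    "paid_search": (2.1, 2.6, 3.4),
    "paid_social": (1.8, 3.9, 3.5),  # MMM above the attribution upper bound
}

for name, (iroas, mmm_roas, attr_roas) in channels.items():
    consistent = iroas <= mmm_roas <= attr_roas
    print(f"{name}: lower={iroas} mmm={mmm_roas} upper={attr_roas} "
          f"{'OK' if consistent else 'CHECK: MMM outside the expected range'}")
```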
and business decisions into three levels

Fundamentals // Planning and portfolio budget allocation*
[Diagram labels: planning / optimisation]
*In the playbook this topic is part of the next chapter
#1 [Planning for early stage] Incrementality experiments for specific channels or campaigns
A 3rd-party tool records a poor ROAS vs. the ROAS reported by the 1st-party tool
An experiment confirms the value of this campaign
#2 [Planning for intermediate/advanced stage] Calibrate attribution results based on incrementality experiments
[Chart: Incremental Impact vs. Attributed Impact by channel]
Channel 2 seems the best performer; however, the calibrated iROAS reveals Channel 1 is the best
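A minimal sketch of this calibration (all channel names, conversions and ROAS figures below are hypothetical): the attributed ROAS is scaled by the share of attributed conversions that the experiment shows to be truly incremental, which is enough to flip the channel ranking.

```python
# Minimal sketch (hypothetical numbers): calibrate attributed ROAS with the
# incremental/attributed ratio from an experiment. The channel that looks best
# on attributed ROAS is not necessarily the best on calibrated iROAS.
channels = {
    # channel: (attributed ROAS, attributed conversions, incremental conversions)
    "channel_1": (3.0, 1_000, 800),
    "channel_2": (4.0, 1_200, 480),
}

for name, (attr_roas, attr_conv, inc_conv) in channels.items():
    multiplier = inc_conv / attr_conv          # share of attributed impact that is incremental
    calibrated_iroas = attr_roas * multiplier
    print(f"{name}: attributed ROAS {attr_roas:.1f} -> calibrated iROAS {calibrated_iroas:.2f}")

# channel_2 "wins" on attributed ROAS (4.0 vs 3.0), but channel_1 wins after
# calibration (2.40 vs 1.60), which is the pattern described on the slide.
```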
Incrementality & attribution to validate optimisations
The optimisation generated a lower ROAS compared to the previous experiment
However, the iROAS in the second experiment is higher than in the first
MMM to calibrate attribution
The calibration multiplier is calculated by dividing MMM ROAS by DDA ROAS
After calibration, Display general has shown a higher drop than Search
[Chart value: 2.13]
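As a worked example (the ROAS and revenue figures are invented; only the formula comes from the slide), the multiplier is simply MMM ROAS over DDA ROAS, and it is then applied to the DDA-attributed results:

```python
# Minimal sketch (hypothetical numbers) of the multiplier described above:
# multiplier = MMM ROAS / DDA ROAS, then applied to the DDA-attributed value.
mmm_roas = 1.7
dda_roas = 0.8
multiplier = mmm_roas / dda_roas              # > 1 means DDA under-credits this channel

dda_attributed_revenue = 120_000
calibrated_revenue = dda_attributed_revenue * multiplier
print(f"multiplier = {multiplier:.2f}, calibrated revenue = {calibrated_revenue:,.0f}")
```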
Attribution to rule out MMM models
These models have a similar Mean Absolute Percentage Error, so they are considered equally accurate
Model 1 has consistently lower iCPAs than CPAs, which is not possible, so the model can be discarded
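The plausibility check behind this can be sketched as follows (all figures are hypothetical): a channel cannot drive more incremental conversions than it gets attributed, so an MMM whose implied iCPA sits consistently below the attributed CPA can be discarded.

```python
# Minimal sketch (hypothetical numbers): two MMM candidates with similar MAPE.
# Sanity check: for the same spend, iCPA (spend / incremental conversions from
# the MMM) should not sit consistently below the attributed CPA.
spend = {"search": 50_000, "display": 30_000}
attributed_conversions = {"search": 2_000, "display": 600}

models = {
    "model_1": {"search": 2_600, "display": 900},  # more incremental than attributed: implausible
    "model_2": {"search": 1_500, "display": 400},
}

for model, incremental in models.items():
    implausible = [
        ch for ch in spend
        if spend[ch] / incremental[ch] < spend[ch] / attributed_conversions[ch]  # iCPA < CPA
    ]
    verdict = "discard" if len(implausible) == len(spend) else "keep"
    print(f"{model}: iCPA < CPA for {implausible or 'no channels'} -> {verdict}")
```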
a. Channels with wide confidence intervals of MMM ROAS should be validated with incrementality experiments
b. Budget shifts based on MMM forecasts should be validated with incrementality experiments, if the shift is > 10%
c. Search in MMM should be validated regularly with incrementality experiments, because it is an always-on medium
2. Thresholds
a. Discrepancy of MMM results vs. incrementality experiment < 10% = no need for validation
b. Discrepancy of MMM results vs. incrementality experiment > 10% = calibrate MMM based on incrementality experiments

INC & MMM // Incrementality to test MMM results
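These rules can be written as a couple of checks. In the rough sketch below, the 10% thresholds come from the slide, while the cut-off for what counts as a "wide" confidence interval and all the example figures are assumptions of mine.

```python
# Minimal sketch of the decision rules above. The 10% thresholds are from the
# slide; the 0.5 relative CI-width cut-off and the example values are assumed.
def needs_validation(ci_width_rel, planned_budget_shift, always_on=False):
    """Should this channel's MMM result be validated with an incrementality test?"""
    return ci_width_rel > 0.5 or planned_budget_shift > 0.10 or always_on

def needs_calibration(mmm_roas, experiment_iroas):
    """Calibrate the MMM if it deviates from the experiment result by more than 10%."""
    discrepancy = abs(mmm_roas - experiment_iroas) / experiment_iroas
    return discrepancy > 0.10

print(needs_validation(ci_width_rel=0.7, planned_budget_shift=0.05))  # True (wide CI)
print(needs_calibration(mmm_roas=2.5, experiment_iroas=2.0))          # True (25% discrepancy)
```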
for calibration after the fact)
1. Calibration multiplier = incrementality test iROAS / MMM ROAS (as seen in the previous pages)
2. Ruling out models based on the least similar results between MMM ROAS and incrementality test iROAS (as seen in the previous pages)

INC & MMM // Calibration of MMM via incrementality tests [Planning for advanced stage]
Bayesian MMM (allows incorporating priors about the effectiveness of media channels)
After an incrementality experiment, the MMM return curve can be calibrated
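A minimal sketch of what "calibrating the return curve" can look like, under assumptions of mine: the Hill-style curve shape, its parameters and the spend/iROAS figures are all hypothetical, and the curve is simply rescaled so that its ROAS at the tested spend level matches the experiment's iROAS (the same multiplier idea as above).

```python
# Minimal sketch (not from the playbook) of recalibrating an MMM return curve
# after an incrementality test: rescale the fitted curve so that its ROAS at
# the tested spend level matches the experiment's iROAS.
def return_curve(spend, beta, half_sat):
    """Hypothetical saturating revenue-response curve (Hill-style)."""
    return beta * spend / (spend + half_sat)

beta, half_sat = 500_000, 80_000   # parameters from the fitted MMM (assumed)
test_spend = 60_000                # spend level covered by the geo experiment
experiment_iroas = 2.4             # iROAS measured in the incrementality test

mmm_roas_at_test = return_curve(test_spend, beta, half_sat) / test_spend
scale = experiment_iroas / mmm_roas_at_test   # calibration multiplier (iROAS / MMM ROAS)

calibrated = lambda s: scale * return_curve(s, beta, half_sat)
print(f"MMM ROAS at test spend: {mmm_roas_at_test:.2f}, multiplier: {scale:.2f}")
print(f"Calibrated ROAS at 100k spend: {calibrated(100_000) / 100_000:.2f}")
```

In a fully Bayesian MMM the same information would typically enter as a prior on the channel's effectiveness rather than as a post-hoc rescaling; the sketch only illustrates the simpler, after-the-fact version.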
1. An experiment should have a clear hypothesis, based on evidence from:
a. Attribution or MMM results
b. Industry research
2. An experiment should have comparable KPIs, so for instance it's important to know that:
a. The amount of sales in attribution depends on the attribution model, the lookback window, etc.
b. The amount of sales in MMM requires 2-3 years of historical sales
c. The amount of sales in incrementality tests depends on the chosen methodology (Conversion Lift, Geo Experiments, etc.)
3. An experiment should be designed with a clear methodology and objective in mind. For example:
a. Conversion Lift based on geography (in the GAds UI) is optimal for calibration
b. Conversion Lift based on users (in the GAds UI) is the least comparable across MEM tools
c. Geo Experiments (open-source code) use 1st-party data, allowing for any comparison, but are resource-intensive
4. An experiment should have a comparable scope. In other words, there should be parity between the scope of the experiment and the scope of the corresponding attribution model or MMM.
test
1. Strive for simplicity. The business question can be answered with a simple analysis or a pre-post test
2. An A/B experiment is more adequate. A/B experiments are better suited for testing variations
3. Awareness is the marketing goal. Incrementality tests are based on short-term sales, which is not enough for measuring awareness
4. There are tech limitations. TV or OOH can't easily be split by region
5. It is not statistically feasible. The amount of sales is too low to get significant results.

Pre-Post vs. Optimisation vs. Incrementality (available in the Appendix)
KPI and its corresponding target to track brand performance, based on:
a. Industry research
b. Own ratios between brand KPIs and their revenue impact
2. Define the measurement tools to track KPIs, for example:
a. Actively collected data
i. 3rd-party brand trackers
ii. Brand Lift surveys
b. Observed data: Share of Search
c. Full-funnel MMM with a Nested Brand-Equity MMM

A Brand KPI is used to add brand awareness to a traditional MMM
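Of these, Share of Search is the simplest to derive from observed data: the brand's search volume divided by the total search volume of its category. A minimal sketch with made-up brands and volumes:

```python
# Minimal sketch (hypothetical brands and volumes) of Share of Search:
# the brand's search volume divided by total category search volume.
monthly_searches = {"our_brand": 42_000, "competitor_a": 61_000, "competitor_b": 37_000}

total = sum(monthly_searches.values())
share_of_search = monthly_searches["our_brand"] / total
print(f"Share of Search: {share_of_search:.1%}")  # ~30.0%
```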
1. Decide the overall budget mix, balancing the brand and performance marketing portfolio
2. Plan investment moments using the insights from Share of Search; peers and seasonality should be considered in planning
3. Connect the baseline to the brand and performance growth targets to set budgets; in other words, the investments should be calculated to reach the goals
4. Use the insights from full-funnel MMM

In Q4 Brand SoS loses strength, thus the investment plan should consider an increase in budget
1. [High priority] Boost creatives, as they are responsible for at least 50% of the average sales effect
2. [Medium priority] Media tactics determine 36% of the average sales effect
3. [Low priority] Brand associations and relevancy account for 15% of the average sales effect