Every retail media campaign ends with a number. Usually it's ROAS. Sometimes it's uplift. Occasionally, it's incremental sales.
But here's the thing nobody wants to talk about: that number is only as good as the baseline it's compared to.
What is a Baseline in Retail Media?
A baseline is what would have happened without the campaign. Not what happened before. Not last year's number. Not the average.
It's the counterfactual. The scenario where the ad never ran - but everything else stayed the same.
That distinction matters more than most people realise.
Why it's the first number finance will challenge
When a brand team walks into a QBR and says "we delivered 5x ROAS," the first question from anyone with a finance background is: compared to what?
If the baseline is wrong, the uplift is wrong. And if the uplift is wrong, the entire business case collapses.
Here's how it usually goes wrong:
- Pre/post comparison. You compare the campaign period to the period before. But seasonality, promotions, distribution changes, price shifts - any of these can move sales independently. Pre/post doesn't isolate the campaign. It isolates the calendar.
- Year-on-year comparison. Better, but still noisy. Was the competitive landscape the same? Was the product in the same number of stores? Was there a price change? YoY gives you a reference, not a baseline.
- Category average as proxy. Sometimes useful for context, but the category includes your competitors' activity too. If the whole category grew 8% and you grew 10%, that 2% gap might be meaningful - or it might be noise.
None of these are baselines. They're benchmarks. Benchmarks tell you how you performed relative to something. Baselines tell you what your campaign actually changed.
Demand Forecasting as a Retail Media Baseline
The most rigorous way to define a baseline is to predict what sales would have been without the campaign - and then compare actuals against that prediction.
This is where demand forecasting models come in.
A demand forecast takes historical sales data and accounts for all the factors that drive volume independently of media: seasonality curves, promotional calendars, price elasticity, distribution changes, weather patterns, holiday effects, and even day-of-week behavior at the store level. Brands and retailers can use advanced shopper insights and audience analytics tools to understand these patterns before campaigns even start. The model learns what "normal" looks like for a given product in a given store in a given week - with all those variables baked in.
The output is an expected sales curve. Not a flat line. Not a naive average. A dynamic prediction that moves with the business, the way the business actually moves.
When a campaign runs, you compare actual sales against the forecast. The gap - actual minus expected - is your uplift. And because the forecast already absorbed seasonality, promotions, and distribution, that gap is much closer to the true incremental effect of the media.
This is fundamentally different from picking a comparison period and hoping nothing else changed. The forecast models the changes. It doesn't ignore them.
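The arithmetic is simple once the forecast exists. A minimal sketch, using made-up daily figures (every number here is illustrative, not from any real campaign):

```python
# Forecast-based uplift: the forecast already reflects seasonality,
# promotions, and price, so the residual gap is attributed to the campaign.

campaign_days = ["d1", "d2", "d3", "d4"]
forecast = {"d1": 1000, "d2": 1100, "d3": 950, "d4": 1050}   # expected units
actual   = {"d1": 1120, "d2": 1210, "d3": 990, "d4": 1180}   # observed units

uplift_units = sum(actual[d] - forecast[d] for d in campaign_days)
expected_total = sum(forecast.values())
lift_pct = uplift_units / expected_total * 100

print(f"Incremental units: {uplift_units}")        # 400 units above expected
print(f"Lift vs expected: {lift_pct:.1f}%")        # ~9.8% over the forecast
```

The point is what sits in the denominator: a moving expectation, not a flat prior-period average.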
Why this matters for retail media specifically
Retail media sits on top of transaction data. That's its advantage. But transaction data alone doesn't give you a baseline - it gives you actuals. You need a model that turns historical actuals into forward-looking expectations. Advanced retail media platforms help retailers connect shopper data, campaign delivery, and sales attribution into one measurement framework.
The retailers and platforms that invest in demand forecasting as part of their measurement stack unlock something powerful: a baseline that's defensible, repeatable, and doesn't depend on finding a "clean" comparison period, because in retail, there's no such thing as a clean period. Something is always on the deal. Something always changed.
A good demand model handles that. It says: "Given everything we know about this product, these stores, this time of year, this pricing, and this promotional plan, here's what we expected to sell." The campaign's job is to beat that expectation.
And when the model is calibrated well, the results hold up under scrutiny. Finance can audit the methodology. The brand can see the assumptions. The conversation shifts from "Do you believe this number?" to "What drove the delta?"
Control groups: the other clean path
Demand forecasting isn't the only way. The other credible method is control groups.
Take a population of shoppers. Expose one group to the campaign. Keep a control group that looks statistically identical but sees no campaign activity. Compare outcomes.
The difference is your uplift. The control group's behavior is your baseline.
In practice, the strongest measurement frameworks use both. The forecast sets the expected baseline across the full campaign footprint. The control group validates it experimentally on a subset. When the two methods agree, confidence goes up. When they diverge, you learn something about your model - and that's valuable too.
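In the experimental version, the control group's behavior is the baseline directly. A minimal sketch with hypothetical per-shopper spend figures:

```python
# Test/control uplift: matched shoppers, only exposure differs.
exposed = [12.4, 9.8, 15.1, 11.0, 13.3]   # saw the campaign
control = [10.1, 9.5, 12.0, 10.4, 11.0]   # statistically similar, no exposure

baseline = sum(control) / len(control)    # control behavior IS the baseline
uplift = sum(exposed) / len(exposed) - baseline

print(f"Baseline spend per shopper: {baseline:.2f}")
print(f"Uplift per shopper: {uplift:.2f}")
```

In a real framework you would also test whether that uplift is statistically significant, and compare it against the forecast-based estimate on the same subset.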
What a good baseline accounts for
Whether you're using demand forecasting, control groups, or both, a credible baseline controls for:
- Seasonality. Is this a period where sales naturally rise or fall?
- Promotional activity. Was the product on deal during the campaign? Was it on deal in the baseline period?
- Distribution changes. Did the product gain or lose stores?
- Price movements. Did the shelf price change?
- Competitive activity. Did a direct competitor launch, delist, or promote during the same window?
- External factors. Weather, holidays, macroeconomic shifts - anything that moves volume at category level.
A demand model absorbs these as inputs. A control group neutralizes them by design. Either way, the baseline reflects reality, not a convenient simplification of it.
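To make "absorbs these as inputs" concrete, here is a toy multiplicative baseline where the known drivers enter as adjustments to a base rate. The functional form and every coefficient are illustrative assumptions, not a fitted model:

```python
# Toy demand baseline: base rate adjusted by known, non-media drivers.
# In practice these multipliers would be estimated from historical data.
def expected_units(base, seasonal_index, on_promo, price, ref_price,
                   elasticity=-1.5):
    promo_mult = 1.25 if on_promo else 1.0          # assumed promo effect
    price_mult = (price / ref_price) ** elasticity  # constant-elasticity term
    return base * seasonal_index * promo_mult * price_mult

# Holiday week (seasonal index 1.2), on deal at a 10% discount:
exp = expected_units(base=1000, seasonal_index=1.2, on_promo=True,
                     price=2.69, ref_price=2.99)
print(f"Expected units: {exp:.0f}")  # higher than base, before any media effect
```

Even this toy version makes the key point: "normal" for a promoted holiday week is far above the annual average, so comparing campaign sales to that average would massively overstate uplift.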
The practical reality
Most retail media networks today don't do this well. Many don't do it at all.
The standard post-campaign report shows impressions, reach, and maybe a sales chart. The baseline is implied, not stated. The uplift is calculated against a period that was chosen because it made the result look good.
That's not measurement. That's marketing.
The networks that win long-term will be the ones that invest in demand forecasting as infrastructure - not as a one-off analysis, but as a continuous model that runs across every campaign, every category, every store. They'll state their baseline methodology upfront, keep it consistent, and let the number be whatever it is - even when it's not flattering.
Because here's what happens when you get baselines right: brands trust the results. When brands trust the results, they increase budgets. When they increase budgets, they stop asking for discounts - because they're buying proof, not placements.
Baseline is not a reporting detail. It's the foundation of every claim retail media makes.
The best baselines are built on demand forecasting models that predict what would have sold anyway, and validated with control group experiments that confirm the prediction. Together, they turn "we think the campaign worked" into "here's what the campaign changed, and here's why we're confident."
If you're building or buying retail media, the first question isn't "what's the ROAS?" It's "how was the baseline defined?"
Everything else follows from there.