Incrementality measures the true impact of your marketing by isolating what happened because of your activity versus what would have happened anyway. If 100 people bought your product and 60 of them would have bought it regardless, your incremental contribution is 40 sales. It is the difference between taking credit and deserving it. Most attribution models tell you who touched the ball last; incrementality tells you whether the ball needed touching at all.
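The arithmetic from the example above can be written as a tiny sketch; the numbers are the illustrative ones from the text, and the function name is ours:

```python
def incremental_sales(total_sales: int, baseline_sales: int) -> int:
    """Sales caused by the marketing activity: observed total minus
    the baseline that would have happened anyway."""
    return total_sales - baseline_sales

# The example from the text: 100 buyers, 60 of whom
# would have bought regardless.
print(incremental_sales(100, 60))  # 40 incremental sales
```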
Without incrementality measurement, you are almost certainly overspending on channels that look good in reports but contribute very little. A brand campaign claiming 500 conversions at last click might be cannibalising organic demand you would have captured for free. Getting this right means reallocating budget from vanity performance to genuine growth, which is the difference between a marketing function that looks busy and one that actually moves revenue.
The most rigorous approach is a controlled experiment: you split a comparable audience into two groups, show your ads to one and withhold them from the other, then measure the difference in outcomes. That gap is your incremental lift. Geo-based tests work well for this, running campaigns in some regions while pausing in matched control regions. You can also run platform-level conversion lift studies on Meta or Google, though those come with obvious conflicts of interest since the platform is grading its own homework.
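A minimal sketch of the arithmetic behind a holdout or geo test, assuming you can scale the control group's result to estimate the test group's counterfactual; the function name and the conversion figures are illustrative, not from any platform's API:

```python
def incremental_lift(test_conversions: float,
                     control_conversions: float,
                     scale: float = 1.0) -> dict:
    """Lift from a holdout experiment: the test group saw ads, the
    matched control group did not. `scale` adjusts for unequal group
    sizes (control * scale estimates the test group's baseline)."""
    baseline = control_conversions * scale
    incremental = test_conversions - baseline
    return {
        "incremental_conversions": incremental,
        "lift_pct": (incremental / baseline) * 100,
    }

# Illustrative numbers: 1,200 conversions in the exposed regions
# versus 1,000 in equally sized matched control regions.
result = incremental_lift(1200, 1000)
print(result)  # 200 incremental conversions, 20% lift
```

The `scale` parameter matters in geo tests, where control regions rarely match the test regions' size exactly; matching on historical conversion volume before the test is what makes the comparison valid.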
The biggest mistake is never measuring incrementality at all and trusting platform-reported ROAS as gospel. Google and Meta will happily claim credit for conversions that were already going to happen. Another common error is running tests that are too short or with audiences too small to produce statistically significant results, then making budget decisions on noise. We also see teams test once, get a result, and never retest. Incrementality is not a fixed number; it shifts with seasonality, competition, and creative fatigue.
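A pooled two-proportion z-test is one standard way to check whether a measured difference is signal or noise; the sketch below uses only the standard library, and the audience sizes in the example are illustrative:

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same observed rates (3% vs 2%), two audience sizes:
print(two_proportion_p_value(30, 1000, 20, 1000))      # not significant at 0.05
print(two_proportion_p_value(300, 10000, 200, 10000))  # significant
```

Note that the identical lift reads as noise at 1,000 users per group and as a clear result at 10,000: this is exactly the "audiences too small" trap, and it is why the confidence threshold should be fixed before the test starts.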
Straight answers to the questions we hear most often from marketers who suspect their reports are flattering them.
Attribution assigns credit for conversions across touchpoints. It tells you which channels were involved. Incrementality asks a harder question: would that conversion have happened if you had done nothing? Attribution describes the journey; incrementality measures whether the journey mattered.
Yes, though precision scales with sample size, which in practice means spend. A simple geo-holdout test, pausing spend in one region while maintaining it in a matched region, costs nothing beyond the revenue risk of pausing. Even a two-week test on a single channel can reveal whether that channel is pulling its weight or just taking credit.
Brand search is the classic offender. If someone types your brand name into Google, they were probably going to buy anyway; paying for that click often adds cost without adding a customer. Retargeting also tends to overstate its value because it targets people who already showed purchase intent. Neither channel is worthless, but both are routinely overfunded.
At minimum, quarterly for your largest channels. Results change as your market, creative, and competitive environment shift. A channel that showed strong incremental lift six months ago might be delivering diminishing returns today. Treat incrementality testing as an ongoing discipline, not a one-off audit.
We build incrementality testing into measurement frameworks from the start, not as an afterthought. That means designing proper holdout experiments, setting statistical confidence thresholds before results come in, and using findings to reallocate budget toward channels that genuinely earn their keep. Our goal is to transfer this capability to your team so you can keep running these tests long after our engagement ends.