Strategy · May 4, 2026 · 7 min read

What is incrementality testing? A plain-English explanation (2026)

Incrementality answers the only question that matters: 'did my ad spend cause this revenue, or would it have happened anyway?' Here's how to actually run a test.

Eslam Hamdy · Floowzy, Founder
Editorial illustration of a holdout group vs treatment group test design.

Attribution answers 'which platforms got credit for this conversion'. Incrementality answers a different, harder question: 'would this conversion have happened anyway if we hadn't run the ads?' It's the only honest measurement of marketing ROI — and the one most teams skip because it's expensive to run.

The core idea (in 30 seconds)

Split your audience or geography into two groups: a treatment group that sees your ads and a holdout group that doesn't. After a defined test window, compare conversion rates between the two groups. The difference is incrementality — the conversions your ads actually caused, not the ones that would have happened organically anyway.
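The arithmetic behind that comparison is simple. Here's a minimal sketch with made-up numbers (the audience sizes and conversion counts are hypothetical, for illustration only):

```python
# Hypothetical test readout — not real campaign data.
treatment_users, treatment_conversions = 100_000, 2_400
holdout_users, holdout_conversions = 100_000, 2_000

treatment_rate = treatment_conversions / treatment_users  # ~2.4%
holdout_rate = holdout_conversions / holdout_users        # ~2.0%

# Absolute lift: extra conversions per user caused by the ads.
absolute_lift = treatment_rate - holdout_rate

# Relative lift: how much the ads raised the organic baseline.
relative_lift = absolute_lift / holdout_rate              # ~20%

# Incremental conversions across the treated audience —
# the ones that would NOT have happened anyway.
incremental_conversions = absolute_lift * treatment_users  # ~400
```

Note that only the 400 incremental conversions belong to the ads; a last-click attribution report would happily claim all 2,400.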

Three ways to run an incrementality test

  • Platform-native conversion lift — Meta, Google, Snap, X all offer built-in conversion-lift tests. They split your audience randomly into treatment + holdout, run for 2-4 weeks, then report the lift. Easy to launch; works only within one platform.
  • Geo holdout — pick statistically similar metros (Austin and Phoenix, say); run ads in one and hold out the other. Measure the gap between the treated metro's conversion rate and the holdout's baseline rate over the test window. Works cross-channel; harder to set up.
  • User-level holdout (advanced) — if you have a CDP and customer IDs, you can randomly hold out 5-10% of your audience from all marketing for a quarter. The most rigorous test; usually only enterprise teams run this.
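For the geo-holdout flavor, the standard estimator is a difference-in-differences: subtract the holdout metro's organic drift from the treated metro's change, so seasonality doesn't get credited to the ads. A rough sketch, with invented weekly numbers for the two example metros:

```python
# Hypothetical weekly conversions for two matched metros (illustrative only).
# Ads run in Austin during the test window; Phoenix is held out.
austin_pre, austin_test = 1_000, 1_250    # weekly average: before / during ads
phoenix_pre, phoenix_test = 980, 1_010    # holdout drifts with seasonality

# Each metro's relative change over the window.
austin_change = (austin_test - austin_pre) / austin_pre        # +25%
phoenix_change = (phoenix_test - phoenix_pre) / phoenix_pre    # ~+3%

# Difference-in-differences: the holdout's drift is the organic
# baseline; what's left is the estimated ad effect.
estimated_lift = austin_change - phoenix_change                # ~22%
```

In practice you'd use more than two geos and a proper matching step, but the subtraction is the core of the method.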

When to run incrementality

Three triggers:

  1. Before scaling a channel meaningfully. If you're about to 2x Meta spend, run a lift test first — confirm the next dollar is earning the same as the current dollar.
  2. When attribution numbers stop making sense. If Meta-reported ROAS is 4:1 but blended company ROAS is 1.8:1, incrementality testing tells you where the gap is.
  3. Quarterly as a discipline. Even when things are going well, run one incrementality test per quarter as the umpire that catches MTA and platform-reported bias before they compound.
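The second trigger is worth making concrete. When one platform's self-reported ROAS implies it drove most of your revenue on a fraction of your spend, the numbers can't both be right — and incrementality is the tiebreaker. A sketch with the hypothetical figures from the trigger above:

```python
# Illustrative numbers only — platform-reported vs blended ROAS.
meta_spend, meta_reported_revenue = 100_000, 400_000   # Meta claims 4:1
total_spend, total_revenue = 250_000, 450_000          # company-wide

reported_roas = meta_reported_revenue / meta_spend     # 4.0
blended_roas = total_revenue / total_spend             # 1.8

# Sanity check: Meta is claiming credit for most of the company's
# total revenue on 40% of the spend. Some of that "attributed"
# revenue would have happened anyway — a lift test sizes how much.
meta_claimed_share = meta_reported_revenue / total_revenue
```

When `meta_claimed_share` approaches (or exceeds) 1.0 across platforms summed together, platform reporting is double-counting organic demand.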

The honest caveats

  • Statistical power matters. Tests below $50k spend on small audiences rarely give significant results. Budget for the test alongside the campaign.
  • Test duration matters. Most B2B and considered-purchase tests need 4-8 weeks to capture the lag between ad exposure and purchase. Ecommerce can often read out in 2-3 weeks.
  • Lift estimates are point estimates, not absolutes. A 12% lift with a 95% confidence interval of [4%, 20%] is much more honest than 'we lifted sales 12%.'
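Computing that interval doesn't require special tooling. A minimal sketch using a two-proportion z-test on the same hypothetical readout as earlier (all counts invented for illustration):

```python
import math

# Hypothetical readout — treatment vs holdout users and conversions.
n_t, x_t = 100_000, 2_400
n_c, x_c = 100_000, 2_000

p_t, p_c = x_t / n_t, x_c / n_c
lift = p_t - p_c  # absolute lift in conversion rate

# Standard error of the difference in proportions (unpooled),
# then a 95% confidence interval using z = 1.96.
se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
ci_low, ci_high = lift - 1.96 * se, lift + 1.96 * se

# Express everything relative to the organic baseline, which is
# how lift is usually reported ("20% lift, CI roughly [14%, 26%]").
rel_lift = lift / p_c
rel_low, rel_high = ci_low / p_c, ci_high / p_c
```

If `rel_low` is above zero, the lift is statistically distinguishable from "the ads did nothing" at the 95% level; if the interval straddles zero, the honest readout is "inconclusive," not "small positive lift."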

What to do with the result

  • Lift is positive and large: keep spending; the channel is incremental.
  • Lift is positive but small: the channel is earning some incremental revenue, but reported ROAS overstates it; adjust expectations.
  • Lift is zero or negative: the channel is mostly cannibalizing organic; either restructure or cut.

Don't act on a single test in isolation — directional patterns across 2-3 tests are more honest than any single point estimate.
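Those decision rules can be written down as a small function. The thresholds below are assumptions for illustration, not doctrine — set your own based on the channel's margin structure:

```python
def recommend(rel_lift_low: float, rel_lift_high: float) -> str:
    """Translate a relative-lift 95% confidence interval into an action.

    Thresholds are illustrative assumptions, not universal constants.
    """
    if rel_lift_low > 0.10:        # clearly incremental even at the low end
        return "scale"
    if rel_lift_low > 0:           # incremental, but reported ROAS overstates it
        return "hold and reset expectations"
    if rel_lift_high < 0:          # even the high end is negative: cannibalizing
        return "cut or restructure"
    return "inconclusive: rerun with more power"   # interval straddles zero
```

Note that the function deliberately takes the interval, not the point estimate — a 12% point lift with an interval of [-2%, 26%] should route to "inconclusive," not "scale."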

The honest take

Most teams over-rely on platform-reported ROAS because incrementality tests are work. Running one per quarter — even a rough geo-holdout — produces the most honest answer about which channels deserve more spend. The teams that get this right compound for years; the teams that don't end up over-funding cannibalizing channels.

Written by

Eslam Hamdy · Floowzy, Founder

Founder of Floowzy. Spent the last decade building marketing analytics tools and running paid media across Meta, Google, TikTok, Snap, and X for mid-market and growth-stage teams.


Floowzy joins your ad platforms read-only and surfaces what the algorithm is doing — anomalies, fatigue, marginal ROAS, cross-platform allocation. Free tier, 60-second setup.