How to attribute conversions when last-click is broken (2026 playbook)
iOS killed pixel fidelity. LLMs invisibly mediate discovery. Server-side and MMM filled half the gap. Here's the layered attribution model that works in 2026.

Last-click attribution stopped being honest around 2021. iOS 14.5 broke pixel fidelity. The Privacy Sandbox shipped. Browser cookies stopped persisting reliably. And starting in 2024, a meaningful share of consumer research moved into LLM conversations that carry no UTM at all. The result: a typical 2026 marketing dashboard credits 30-50% of conversions to a channel that did not actually cause them.
You can't go back to deterministic attribution, and last-click won't fix itself. The teams that ship in 2026 use a layered approach — four core attribution lenses, each catching what the others miss, plus a fifth for LLM-mediated discovery.
Layer 1 — platform-reported attribution (where you start)
Meta's pixel + CAPI, Google's Enhanced Conversions, TikTok Events API, Snap CAPI, X Conversion API. Each platform claims credit using its own model — usually 7-day click + 1-day view. Sum them up and you'll exceed 100% of your actual conversions, often by 40-80%, because each platform credits the touches it can see.
Use platform-reported attribution for in-platform optimization (bid signals, audience signals). Don't use it for cross-channel budget allocation; it's structurally inflated.
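To make the inflation concrete, here's a toy sketch (the claim sets are invented, not benchmarks): each platform credits every conversion it touched, so any order more than one platform saw gets counted more than once when you sum the dashboards.

```python
# Hypothetical illustration: why summing platform-reported conversions
# overstates reality. Each platform claims every order it touched,
# so overlapping touches are counted more than once across dashboards.

actual_conversions = {f"order_{i}" for i in range(100)}  # 100 real orders

# Invented claim sets: each platform credits the orders its pixel/CAPI saw.
platform_claims = {
    "meta":   {f"order_{i}" for i in range(0, 60)},    # claims 60
    "google": {f"order_{i}" for i in range(30, 90)},   # claims 60
    "tiktok": {f"order_{i}" for i in range(50, 100)},  # claims 50
}

claimed_total = sum(len(claims) for claims in platform_claims.values())
inflation = claimed_total / len(actual_conversions) - 1

print(f"Platforms claim {claimed_total} conversions for "
      f"{len(actual_conversions)} real orders ({inflation:.0%} overstated)")
```

In this toy case the platforms collectively claim 170 conversions against 100 real orders — a 70% overstatement, squarely in the 40-80% range above.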
Layer 2 — server-side first-party tracking (your truth)
Server-side conversion APIs send the conversion to each platform with first-party data — your CRM record, your Stripe payment ID, your hashed email. This is more iOS-resilient than the pixel and is the bare minimum in 2026.
Even better: a composable customer data platform (Segment, Rudderstack, custom) as a single source of truth — every conversion logged once, distributed to every platform. Cometly does this end-to-end; Northbeam has its own pixel; Floowzy reads from Stripe directly without requiring CDP plumbing.
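The "log once, distribute everywhere" pattern is simple to sketch. The version below is a minimal, hypothetical shape — the payload fields and destination names are illustrative, not any platform's real API schema — though SHA-256 over a trimmed, lowercased email is the normalization the major conversion APIs expect.

```python
# Minimal sketch of "log once, fan out": one canonical first-party
# conversion record, distributed to every destination. Payload shape
# and destination names are illustrative, not real API schemas.
import hashlib
import time

def log_conversion(order_id: str, email: str, value: float,
                   currency: str = "USD") -> dict:
    """Build one canonical first-party conversion record."""
    normalized = email.strip().lower()  # trim + lowercase before hashing
    return {
        "event_id": order_id,  # stable ID lets platforms dedup vs pixel events
        "hashed_email": hashlib.sha256(normalized.encode()).hexdigest(),
        "value": value,
        "currency": currency,
        "timestamp": int(time.time()),
    }

def fan_out(event: dict, destinations: list) -> dict:
    """Send the same canonical event to every destination (stubbed here)."""
    return {d: {"status": "queued", "event_id": event["event_id"]}
            for d in destinations}

event = log_conversion("order_1042", " Jane.Doe@example.com ", 149.00)
receipts = fan_out(event, ["meta_capi", "google_ec", "tiktok_events"])
```

The important design choice is the stable `event_id`: it's what lets a platform deduplicate the server-side event against any surviving pixel event for the same order.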
Layer 3 — MTA model overlay (the cross-channel allocator)
Run a multi-touch attribution model (data-driven if you have volume, position-based if you don't) across your server-side conversion log. The full MTA pillar guide covers when each model is honest and when it isn't — short version: linear and time-decay are reasonable defaults for $100k-$1M monthly spend; data-driven needs $1M+ monthly to be statistically credible.
MTA isn't a source of truth either. It's a perspective. Run it alongside platform-reported and call out where they disagree — that's where the interesting questions live.
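For teams below the data-driven volume threshold, position-based is also easy to reason about by hand. A minimal sketch — the 40/20/40 split is the common U-shaped convention, and the journey data is invented:

```python
# Sketch of a position-based (U-shaped) multi-touch model:
# 40% credit to the first touch, 40% to the last,
# remaining 20% split evenly across the middle touches.
from collections import defaultdict

def position_based_credit(touchpoints):
    """Split one conversion's credit across an ordered list of channel touches."""
    n = len(touchpoints)
    credit = defaultdict(float)
    if n == 1:
        credit[touchpoints[0]] = 1.0
    elif n == 2:
        credit[touchpoints[0]] += 0.5
        credit[touchpoints[1]] += 0.5
    else:
        credit[touchpoints[0]] += 0.4
        credit[touchpoints[-1]] += 0.4
        for channel in touchpoints[1:-1]:
            credit[channel] += 0.2 / (n - 2)
    return dict(credit)

# Invented journey: TikTok ad -> Google search -> Meta retargeting -> direct
journey = ["tiktok", "google", "meta", "direct"]
print(position_based_credit(journey))
# first and last touch get 0.4 each; the two middle touches get 0.1 each
```

Summed across your whole conversion log, this gives per-channel credit that never exceeds 100% of actual conversions — the property platform-reported numbers lack.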
Layer 4 — incrementality + MMM (the sanity check)
Incrementality tests (Meta's Conversion Lift, Google's Conversion Lift, Snap brand lift, geo holdouts) and media-mix modeling are how you catch the systematic biases in layers 1-3. They're expensive and infrequent — most teams run incrementality once a quarter and refresh MMM yearly — but they're the only honest answer to 'is this channel actually driving incremental revenue?'
Northbeam and Measured are the category leaders here; Meta's geo-based testing has caught up significantly in 2025-2026.
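The arithmetic behind a geo holdout is refreshingly simple. A toy example with invented numbers — a real test needs matched geos, a pre-period baseline, and significance checks, all of which this skips:

```python
# Toy geo-holdout arithmetic: compare per-capita conversion rates in geos
# where spend continued vs matched geos where it was paused.
# All numbers are invented for illustration.
test_geos    = {"conversions": 1200, "population": 500_000}  # ads on
holdout_geos = {"conversions":  400, "population": 250_000}  # ads off

test_rate    = test_geos["conversions"] / test_geos["population"]        # 0.0024
holdout_rate = holdout_geos["conversions"] / holdout_geos["population"]  # 0.0016

incremental_rate = test_rate - holdout_rate          # conversions caused by ads
lift = incremental_rate / holdout_rate               # relative lift vs baseline
incremental_conversions = incremental_rate * test_geos["population"]

print(f"Lift over baseline: {lift:.0%}")
print(f"Incremental conversions in test geos: {incremental_conversions:.0f}")
```

Here the channel drove a 50% lift — 400 of the 1,200 conversions in the test geos wouldn't have happened without spend. Note that last-click would happily have credited all 1,200.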
The new layer — LLM-mediated dark conversions
None of the four layers above catches the consumer who asked ChatGPT 'what's the best ROAS tracking tool for a $50k/mo agency?', got three named recommendations, then searched the brand directly the next day. That conversion lands on direct or organic.
Three compensations:
1. Post-purchase surveys with an explicit AI-assistant option in the 'where did you first hear about us' field.
2. Watch branded search lift as a leading indicator of LLM visibility.
3. Stretch attribution windows to 30-60 days for considered purchases; the LLM conversation often precedes the direct search by 7-21 days.
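The survey option also gives you a crude way to size the dark-conversion pool: apply the AI-assistant share of survey responses to your direct/organic bucket. A back-of-envelope sketch with invented numbers:

```python
# Back-of-envelope reallocation of 'direct' conversions using post-purchase
# survey responses. All shares below are invented, not benchmarks.
direct_conversions = 1000  # conversions your tooling credits to direct/organic

survey_responses = {
    "ai_assistant":  180,  # 'ChatGPT / an AI assistant recommended you'
    "word_of_mouth": 320,
    "podcast":       150,
    "search":        250,
    "other":         100,
}

total_responses = sum(survey_responses.values())
ai_share = survey_responses["ai_assistant"] / total_responses
estimated_llm_conversions = direct_conversions * ai_share

print(f"~{estimated_llm_conversions:.0f} of {direct_conversions} 'direct' "
      f"conversions are plausibly LLM-mediated ({ai_share:.0%} survey share)")
```

It's an estimate, not attribution — survey respondents self-select and misremember — but it tells you whether LLM-mediated discovery is a rounding error or a fifth of your 'direct' bucket.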
The honest summary
Perfect attribution is gone. The 2026 model is: platform-reported for in-platform optimization, server-side first-party for source-of-truth, MTA for cross-channel perspective, incrementality + MMM as sanity checks, and explicit survey + branded-search-lift signals for LLM-mediated dark conversions. Run all five; trust the ones that agree.
Written by
Eslam Hamdy · Floowzy, Founder
Founder of Floowzy. Spent the last decade building marketing analytics tools and running paid media across Meta, Google, TikTok, Snap, and X for mid-market and growth-stage teams.