Last updated: 2026-04-04
TL;DR
- No single attribution model is universally correct; the right one depends on your sales cycle and channel mix.
- Last-click attribution undervalues awareness channels; first-click undervalues closing channels.
- Data-driven attribution has been GA4's default model since 2023 and works well above ~1,000 monthly conversions.
- Shorter sales cycles suit simpler models; longer B2B cycles demand multi-touch or data-driven approaches.
- Combine attribution with incrementality testing for the most accurate picture of marketing performance.
What Is Attribution Modelling and Why Does It Matter?
Attribution modelling is the method you use to assign credit for a conversion (a sale, a lead, a signup) to the marketing touchpoints a customer interacted with before converting. If someone clicked a Google ad on Monday, read a blog post on Wednesday, and converted through an email on Friday, attribution modelling decides how much credit each of those three touchpoints receives.
The model you choose directly determines which channels appear profitable and which appear wasteful. That is not a small thing. It shapes budget allocation, team priorities, and strategic direction. Pick the wrong model and you will systematically overfund one channel while starving another that quietly drives the majority of your revenue.

We have seen this pattern repeatedly across 250+ clients: a business running last-click attribution concludes that branded PPC is their best-performing channel, cuts spend on upper-funnel display and social, and then watches overall conversions decline over the following quarter. The branded PPC was catching demand that display and social created. Last-click hid that relationship entirely.
Attribution is not an academic exercise. It is a budgeting tool. And if the tool is miscalibrated, the budget is wrong.
The Seven Common Attribution Models
There are seven models you will encounter in most analytics platforms. The first six are rules-based, and the seventh (data-driven) has been the default in GA4 since 2023. Each distributes credit differently across the customer journey.
1. Last-Click Attribution
100% of credit goes to the final touchpoint before conversion. If someone clicked five ads over two weeks but converted after clicking a retargeting ad, that retargeting ad gets all the credit.
Where it shines: Businesses with a single dominant conversion channel and very short purchase cycles (under 24 hours). Direct-to-consumer impulse purchases. Simple e-commerce with limited upper-funnel activity.
Where it fails: Any business running awareness campaigns. Last-click is structurally incapable of valuing anything that is not the final interaction.
2. First-Click Attribution
The mirror image: 100% of credit goes to the first touchpoint. The channel that introduced the customer gets everything; the channel that closed the deal gets nothing.
Where it shines: Businesses whose primary challenge is acquisition, not conversion. If you are trying to evaluate which channels bring net-new audiences into the funnel, first-click isolates that signal.
Where it fails: It completely ignores everything that happens between discovery and conversion. For long sales cycles with multiple nurture touchpoints, this is dangerously incomplete.
3. Linear Attribution
Credit is distributed equally across every touchpoint. Five touchpoints? Each gets 20%.
Where it shines: Businesses where every stage of the funnel matters roughly equally and you want a balanced view without favouring any single channel. It is the “fairest” model in a naive sense.
Where it fails: Fairness is not the same as accuracy. Linear attribution treats a passing impression the same as a high-intent product page visit. It dilutes the signal from your strongest channels.
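Linear attribution is simple enough to express in a few lines. The Python sketch below (channel names are illustrative, not from any platform) splits one conversion's credit evenly across a path:

```python
from collections import Counter

def linear_attribution(path):
    """Split one conversion's credit equally across every touch in the path.
    A channel that appears twice accumulates two shares."""
    share = 1.0 / len(path)
    credit = Counter()
    for channel in path:
        credit[channel] += share
    return dict(credit)

# A five-touch journey: each touch is worth 20% of the conversion.
path = ["display", "social", "email", "display", "paid_search"]
print(linear_attribution(path))
# display appears twice, so it collects 40% of the credit
```

Note how the repeated display touch quietly doubles that channel's credit: exactly the dilution problem described above, since neither touch was necessarily high-intent.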
4. Time-Decay Attribution
More credit goes to touchpoints closer to the conversion, with earlier touchpoints receiving progressively less. Google’s implementation uses a 7-day half-life: a touchpoint 7 days before conversion gets half the credit of one on conversion day.
Where it shines: Short promotional cycles and time-sensitive campaigns. If you are running a 2-week product launch, time-decay appropriately weights the touches that happened during the decision-making window rather than a casual blog visit from 6 weeks ago.
Where it fails: B2B sales cycles of 3+ months. In long cycles, the initial research phase (which often determines the consideration set) is so far from conversion that time-decay essentially zeroes it out.
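The half-life rule translates directly into code. This sketch (the tuple format and channel names are assumptions for illustration) weights each touch by 2^(−days/7) and then normalises so the credit sums to one:

```python
def time_decay_attribution(path, half_life_days=7.0):
    """path: list of (channel, days_before_conversion) tuples.
    Each touch is weighted by 2 ** (-days / half_life), so a touch
    7 days out earns half the weight of one on conversion day."""
    weights = [(channel, 2 ** (-days / half_life_days)) for channel, days in path]
    total = sum(w for _, w in weights)
    credit = {}
    for channel, w in weights:
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# Touches 14, 7, and 0 days before conversion carry weights 0.25 : 0.5 : 1.0
path = [("display", 14), ("email", 7), ("paid_search", 0)]
print(time_decay_attribution(path))
```

Run the numbers for a 90-day-old research touch and its weight drops below 0.02% of a conversion-day touch, which is the long-cycle failure mode described above.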
5. Position-Based (U-Shaped) Attribution
40% of credit to the first touchpoint, 40% to the last, and the remaining 20% split equally among everything in the middle. Some platforms allow you to adjust these percentages.
Where it shines: Businesses that value both acquisition and conversion, which is most businesses. Position-based is the strongest “simple” model for multi-channel marketing. It acknowledges that the introduction and the close are critical while still giving some credit to nurture touches.
Where it fails: It arbitrarily assigns 40/40/20 rather than measuring actual impact. If your middle-funnel content is the real differentiator (common in B2B), position-based undervalues it by design.
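A minimal sketch of the U-shaped split, with the percentages exposed as parameters since some platforms let you adjust them. The handling of one- and two-touch paths here follows a common convention (100% and 50/50 respectively), not any particular platform's specification:

```python
def position_based_attribution(path, first=0.40, last=0.40):
    """U-shaped: 40% to the first touch, 40% to the last, and the
    remainder split equally across the middle touches."""
    credit = {channel: 0.0 for channel in path}
    if len(path) == 1:
        credit[path[0]] = 1.0
    elif len(path) == 2:
        # No middle: split the whole conversion between first and last.
        credit[path[0]] += 0.5
        credit[path[-1]] += 0.5
    else:
        credit[path[0]] += first
        credit[path[-1]] += last
        middle_share = (1.0 - first - last) / (len(path) - 2)
        for channel in path[1:-1]:
            credit[channel] += middle_share
    return credit

# Four touches: 40% / 10% / 10% / 40%
print(position_based_attribution(["display", "social", "email", "paid_search"]))
```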
6. W-Shaped Attribution
An extension of position-based. 30% to the first touch, 30% to the lead creation touch, 30% to the opportunity creation touch, and 10% distributed across everything else. This requires a CRM with clearly defined lifecycle stages.
Where it shines: B2B companies with defined pipeline stages in a CRM like HubSpot or Salesforce. It captures the three most important transitions in a B2B buying cycle: awareness, lead, and opportunity.
Where it fails: You need clean CRM data and clearly defined stage transitions. If your lead-to-opportunity process is messy, the model produces garbage. It also requires enough conversion volume at each stage to be statistically meaningful.
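A sketch of the W-shaped split, assuming your CRM can tell you which touch created the lead and which created the opportunity. The index-based interface is an illustration only, not a HubSpot or Salesforce API:

```python
def w_shaped_attribution(path, lead_idx, opp_idx):
    """30% each to the first touch, the lead-creation touch, and the
    opportunity-creation touch; the remaining 10% split equally across
    all other touches. Assumes the three milestone touches are distinct
    and at least one non-milestone nurture touch exists."""
    credit = {channel: 0.0 for channel in path}
    milestones = {0, lead_idx, opp_idx}
    others = [i for i in range(len(path)) if i not in milestones]
    for i in milestones:
        credit[path[i]] += 0.30
    for i in others:
        credit[path[i]] += 0.10 / len(others)
    return credit

# display opens the journey, email creates the lead, demo creates the opportunity
path = ["display", "organic", "email", "webinar", "demo"]
print(w_shaped_attribution(path, lead_idx=2, opp_idx=4))
```

The stated assumptions are exactly the model's weakness: if lead or opportunity stages are mislabelled in the CRM, `lead_idx` and `opp_idx` point at the wrong touches and 60% of the credit lands in the wrong place.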
7. Data-Driven Attribution (DDA)
Google’s machine learning model in GA4 analyses your actual conversion paths to determine how much credit each touchpoint deserves based on its incremental contribution. It uses a Shapley value approach, borrowed from cooperative game theory, to calculate each channel’s marginal impact (Google Analytics Help, 2024).
Where it shines: Businesses with sufficient data volume. Google requires a minimum threshold (around 400 conversions per conversion action over 28 days) for the model to train properly. If you meet that threshold, DDA will generally outperform any rules-based model because it uses your data, not assumptions.
Where it fails: Low-traffic sites. If you get 50 conversions a month, DDA does not have enough signal to model reliably. It also only considers touchpoints Google can track, which means offline, dark social, and privacy-blocked interactions are missing from the model.
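Google's production model is proprietary, but the Shapley idea itself is easy to illustrate. This toy Python sketch (the channels and conversion rates are invented numbers, not real data) averages each channel's marginal contribution over every order in which channels could be added to the mix:

```python
from itertools import permutations

def shapley_credit(channels, conv_rate):
    """conv_rate maps a frozenset of present channels to an observed
    conversion rate. A channel's credit is its average marginal
    contribution across all orderings -- the Shapley value."""
    credit = {ch: 0.0 for ch in channels}
    orders = list(permutations(channels))
    for order in orders:
        present = frozenset()
        for ch in order:
            before = conv_rate.get(present, 0.0)
            present = present | {ch}
            credit[ch] += conv_rate.get(present, 0.0) - before
    return {ch: total / len(orders) for ch, total in credit.items()}

# Invented rates: search converts well alone, display mostly assists.
rates = {
    frozenset(): 0.0,
    frozenset({"display"}): 0.01,
    frozenset({"search"}): 0.03,
    frozenset({"display", "search"}): 0.05,
}
print(shapley_credit(["display", "search"], rates))
# display ≈ 0.015, search ≈ 0.035 -- the credit sums to the full 0.05 rate
```

Notice that display earns more than its solo rate of 0.01: the Shapley calculation rewards it for lifting search's performance when both run together, which is precisely the assist effect rules-based models cannot see.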
Attribution Model Comparison Table
The table below summarises each model’s credit distribution, minimum data requirements, and ideal use case. Keep it as a reference when evaluating which model fits your current setup.
| Model | Credit Distribution | Best For | Minimum Data Needed | Key Weakness |
|---|---|---|---|---|
| Last-Click | 100% to final touchpoint | Short sales cycles, single channel | Low | Ignores all upper-funnel activity |
| First-Click | 100% to first touchpoint | Evaluating acquisition channels | Low | Ignores nurture and closing channels |
| Linear | Equal across all touchpoints | Balanced multi-channel view | Low | Treats all touches as equally important |
| Time-Decay | More to recent touchpoints | Short promo cycles, time-sensitive offers | Low | Undervalues long-cycle awareness |
| Position-Based | 40% first / 40% last / 20% middle | Multi-channel with clear first and last touch | Medium | Arbitrary percentage split |
| W-Shaped | 30/30/30/10 across lifecycle stages | B2B with CRM pipeline stages | Medium-High | Requires clean CRM data |
| Data-Driven | Algorithmic, based on actual paths | High-traffic multi-channel businesses | 400+ conversions per 28 days | Black box; limited to tracked touchpoints |
One important note: these models are not mutually exclusive in your analysis. You can (and should) compare results across models to identify channels where the models disagree. That disagreement is where the real insight lives.
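One cheap way to surface that disagreement is to score the same batch of converting paths under two extreme models and rank channels by the gap. A Python sketch with hypothetical paths:

```python
from collections import Counter

def model_disagreement(paths):
    """For each converting path, award one conversion to the first touch
    (first-click) and one to the last (last-click), then report
    first-click minus last-click credit per channel. Strongly positive
    channels open journeys; strongly negative channels close them."""
    first, last = Counter(), Counter()
    for path in paths:
        first[path[0]] += 1
        last[path[-1]] += 1
    return {ch: first[ch] - last[ch] for ch in set(first) | set(last)}

paths = [
    ["display", "branded_search"],
    ["display", "email", "branded_search"],
    ["organic", "branded_search"],
]
print(model_disagreement(paths))
# display: +2 (opens journeys), branded_search: -3 (closes them)
```

A large negative score for branded search, as in this toy data, is the classic signal that it is harvesting demand other channels created.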
How to Pick the Right Attribution Model for Your Business
Choosing an attribution model is not a theoretical exercise. It is a practical decision based on three measurable factors: your sales cycle length, your channel complexity, and your data volume.
Factor 1: Sales Cycle Length
If your average time from first touch to conversion is under 7 days, simpler models work. Last-click is not ideal, but it is less wrong when the entire journey happens in a single session or over a few days. Time-decay also works well here because the recency bias matches the actual compressed timeline.
Sales cycles of 7 to 30 days are the sweet spot for position-based attribution. There is enough journey complexity to benefit from a multi-touch model, but not so much that you need machine learning.
For sales cycles over 30 days, especially the 3 to 12 month cycles common in B2B SaaS and enterprise, you need either W-shaped (if you have CRM integration) or data-driven attribution. Rules-based models cannot capture the complexity of a 6-month buying committee journey with 20+ touchpoints across 4 stakeholders.
Factor 2: Number of Active Channels
If you run 1 to 2 channels, attribution modelling is less critical. With only paid search and email, the question of credit distribution is simpler because there are fewer paths to model.
At 3 to 5 active channels, multi-touch models become important. This is where linear and position-based models add genuine value, because channel interaction effects start to matter.
Above 5 channels, data-driven attribution is the only model that can reliably account for the combinatorial complexity of user paths. A user who sees a YouTube ad, clicks an organic result, receives an email, clicks a retargeting display ad, and converts through a branded search has taken a path that rules-based models can only approximate.
Factor 3: Monthly Conversion Volume
This is the constraint most businesses underestimate. Data-driven attribution in GA4 requires substantial conversion volume to produce reliable output. Google’s documentation states a minimum of 400 conversions per conversion action over 28 days (Google Analytics Help, 2024). In practice, we have found that models become genuinely stable above 1,000 monthly conversions.
If you are below that threshold, you are better off using position-based attribution and supplementing with manual analysis. A poorly trained data-driven model is worse than a well-understood rules-based model.
Decision Framework
- Measure your average sales cycle length. Check your CRM or GA4 path length report.
- Count your active marketing channels (any channel with meaningful spend or effort).
- Check your monthly conversion volume in GA4 under Admin > Conversions.
- Match these three factors to the comparison table above.
- Run 2 to 3 models in parallel for 90 days and compare the results before committing budget decisions to any single model.
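The framework above can be condensed into a rough heuristic. The thresholds below are this article's rules of thumb, not platform requirements, so treat the sketch as a starting point for discussion rather than a definitive router:

```python
def recommend_model(cycle_days, active_channels, monthly_conversions, has_crm=False):
    """Map the three factors to a starting attribution model.
    Thresholds follow the rules of thumb above: DDA needs volume,
    long B2B cycles need milestone-aware models, and short cycles
    tolerate simple ones."""
    if monthly_conversions >= 1000:
        return "data-driven"
    if cycle_days > 30:
        return "w-shaped" if has_crm else "position-based"
    if cycle_days < 7 and active_channels <= 2:
        return "last-click or time-decay"
    return "position-based"

# 45-day cycle, 4 channels, 300 conversions/month, CRM in place
print(recommend_model(45, 4, 300, has_crm=True))
```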
Why No Attribution Model Tells the Full Story
Every attribution model, including data-driven, has a fundamental limitation: it can only credit what it can observe. And the observable portion of the customer journey is shrinking, not growing.
Apple’s App Tracking Transparency framework, introduced in 2021, reduced the trackable conversion paths on iOS by an estimated 30 to 40% (Flurry Analytics, 2023). Google’s Privacy Sandbox, cookie deprecation timelines (repeatedly delayed but directionally certain), and increasing use of VPNs and ad blockers all reduce the data pool that attribution models rely on.
Then there is dark social. A recommendation in a private Slack channel, a WhatsApp message sharing your article, a mention on a podcast that the listener later Googles: none of this shows up in any attribution model. And yet for many B2B companies, these untrackable touchpoints influence 50% or more of the buying decision. A 2023 study by Refine Labs found that 83% of B2B buyers reported that “dark funnel” touchpoints influenced their purchase, but only 13% of those touchpoints were captured by attribution tools (Refine Labs, 2023).
Supplementing Attribution with Incrementality Testing
Incrementality testing (also called lift testing) uses controlled experiments to measure the causal impact of a channel. You run a channel for one group and suppress it for another, then measure the difference in conversions. Platforms like Meta and Google offer built-in conversion lift studies. For cross-channel incrementality, geo-based experiments (where you vary spend by geographic region) are the gold standard.
The combination of attribution modelling for day-to-day budget allocation and incrementality testing for quarterly strategic validation gives you the most complete picture. Attribution tells you where to steer; incrementality tells you whether the steering is actually working.
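The arithmetic behind a lift test is straightforward. This sketch (the holdout numbers are invented for illustration) computes the relative lift and the implied count of incremental conversions:

```python
def incremental_lift(test_convs, test_n, control_convs, control_n):
    """Relative lift of the exposed group over the holdout, plus the
    implied number of conversions the channel actually caused."""
    test_rate = test_convs / test_n
    control_rate = control_convs / control_n
    lift = (test_rate - control_rate) / control_rate
    incremental = (test_rate - control_rate) * test_n
    return lift, incremental

# 10,000 users exposed to the channel, 10,000 held out
lift, extra = incremental_lift(320, 10_000, 250, 10_000)
print(f"lift: {lift:.0%}, incremental conversions: {extra:.0f}")
# lift: 28%, incremental conversions: 70
```

Note what this implies: attribution might credit the channel with all 320 conversions, but the experiment suggests only around 70 were actually caused by it. That gap is the whole argument for running both measurements.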
Media Mix Modelling for Larger Budgets
Businesses spending over £500,000 GBP annually across multiple channels should also consider Media Mix Modelling (MMM). Unlike attribution (which tracks individual user paths), MMM uses aggregate statistical analysis to model the relationship between marketing spend and business outcomes over time. Google’s open-source Meridian tool, released in 2024, has made MMM more accessible to mid-market businesses (Google Meridian Documentation, 2024).
MMM, attribution, and incrementality are not competing approaches. They are complementary lenses. The most sophisticated marketing teams we work with through our Fractional CMO service use all three, calibrated against each other.
Common Attribution Mistakes (and How to Avoid Them)
After 15+ years in digital marketing and working across hundreds of client accounts, we see the same attribution errors repeatedly. Here are the ones that cost the most money.
Mistake 1: Treating Attribution as Truth Rather Than a Model
An attribution model is a simplification of reality. It is useful in the same way a map is useful: it represents the territory, but it is not the territory. The moment you treat attribution numbers as absolute truth and make aggressive budget cuts based on a single model’s output, you are making decisions based on a simplified representation. Run multiple models. Compare them. Use the disagreements between models as signals for further investigation.
Mistake 2: Ignoring the Lookback Window
GA4’s default lookback window is 30 days for acquisition events and 90 days for other conversions. If your sales cycle is 120 days, GA4 will literally not see the first touchpoint. You need to adjust lookback windows to match your actual sales cycle. In GA4, this is configurable under Admin > Attribution Settings.
Mistake 3: Attributing Revenue to Branded Search Without Question
Branded search almost always looks like the highest-performing channel in last-click attribution. But branded search captures demand; it rarely creates it. Before celebrating your branded PPC ROAS, ask: what created the demand that branded search captured? The answer is usually the awareness channels you are thinking about cutting.
Mistake 4: Not Connecting Offline Conversions
If you generate leads online but close sales offline (via phone, in-person meetings, or demos), your attribution model only sees half the picture. Connecting offline conversions back to GA4 and your ad platforms is not optional for businesses with mixed online/offline sales processes. Google’s offline conversion imports and CRM integrations via tools like Zapier or native connectors make this technically straightforward, but many businesses skip it out of perceived complexity.
Mistake 5: Set and Forget
Your marketing mix changes. Your audience behaviour changes. Platform tracking capabilities change. An attribution setup that was appropriate 18 months ago may be entirely wrong today. We recommend a quarterly review of your attribution model configuration, lookback windows, and conversion definitions as part of your broader marketing strategy review.
A Practical Starting Point for Most Businesses
If you have read this far and feel unsure where to begin, here is a pragmatic starting configuration that works for the majority of SMBs and scaling startups.
- Use GA4’s data-driven attribution as your primary model. It has been the default since Google retired last-click in 2023, and for most businesses with reasonable traffic it produces better results than any rules-based model.
- Set your lookback window to match your sales cycle. If your average time-to-conversion is 45 days, set the lookback window to at least 60 days. Do not accept the 30-day default without checking.
- Run a monthly comparison report. In GA4, go to Advertising > Model comparison and compare data-driven vs last-click vs first-click. The channels where these models disagree most are the ones worth investigating further.
- Add a “how did you hear about us” field to your conversion forms. This simple self-reported attribution question captures dark social and offline touchpoints that no model can track. It is low-tech and imperfect, but it fills a gap that sophisticated tools miss entirely.
- Plan one incrementality test per quarter. Pick the channel you are least certain about and run a geo-based or platform-native lift test. One test per quarter gives you four data points per year to calibrate your attribution model against reality.
This is not a perfect system. Perfect measurement does not exist in marketing, and anyone claiming otherwise is selling something. But this approach gives you a measurement framework that is directionally correct and improves over time. That is more than most businesses have.
If you want to go deeper on how AI-powered search is changing the attribution picture, our GEO glossary entry covers how generative search results create new touchpoints that traditional attribution misses entirely.
Frequently Asked Questions
What is the best attribution model for small businesses?
For small businesses with fewer than 500 monthly conversions, position-based (U-shaped) attribution provides the best balance of simplicity and accuracy. It credits both the channel that introduced the customer and the one that closed the deal. If you have enough conversion volume, GA4’s data-driven model is a better choice, but it requires sufficient data to train reliably.
Why did Google remove last-click attribution as the default in GA4?
Google switched GA4’s default to data-driven attribution in late 2023 because last-click systematically undervalues upper-funnel channels. In a multi-channel world, giving 100% credit to the final touchpoint produces misleading budget allocation decisions. Data-driven attribution uses machine learning to distribute credit based on actual observed impact across conversion paths.
How does attribution modelling work with iOS privacy changes?
Apple’s App Tracking Transparency (ATT) reduces the number of trackable user paths, which means all attribution models operate on incomplete data for iOS users. Businesses need to supplement digital attribution with methods like incrementality testing, Media Mix Modelling, and self-reported attribution (asking customers how they found you) to fill the gaps that privacy restrictions create.
What is the difference between attribution modelling and Media Mix Modelling?
Attribution modelling tracks individual user journeys and assigns credit to specific touchpoints. Media Mix Modelling (MMM) uses aggregate statistical analysis to model the relationship between total marketing spend and business outcomes over time. Attribution is better for tactical day-to-day decisions; MMM is better for strategic budget allocation across channels. The most effective measurement frameworks use both.
How often should I review my attribution model?
We recommend a quarterly review. Check whether your lookback window still matches your sales cycle, compare results across multiple models in GA4’s model comparison report, and verify that your conversion definitions are still accurate. Marketing channels and user behaviour change frequently enough that an annual review is insufficient.
Can attribution modelling track word-of-mouth or podcast mentions?
No. Traditional attribution models only track digital touchpoints they can observe (clicks, ad impressions, website visits). Word-of-mouth, podcast mentions, private messages, and other ‘dark social’ touchpoints are invisible to attribution tools. The best workaround is adding a self-reported attribution question (‘how did you hear about us?’) to your conversion forms.
What is incrementality testing and how does it relate to attribution?
Incrementality testing uses controlled experiments to measure the causal impact of a marketing channel, rather than just correlating touchpoints with conversions. You suppress a channel for a test group and compare conversion rates against a control group. Attribution tells you what touchpoints appeared in the journey; incrementality tells you which ones actually caused the conversion.
Is data-driven attribution in GA4 reliable for B2B businesses?
It depends on conversion volume. B2B businesses with long sales cycles often have lower conversion counts, which can make GA4’s data-driven model unreliable. If you have fewer than 400 conversions per 28 days for a given conversion action, the model may not train properly. B2B companies with lower volume should consider W-shaped attribution with CRM integration as an alternative.
Want to Build a Measurement Framework That Actually Works?
Attribution modelling is one piece of a larger measurement puzzle. We help marketing teams build the capability to measure, analyse, and act on their data independently. No dependency, no black boxes.


