AI Max, Performance Max, and Smart Bidding have taken over the inputs that PPC teams used to control — keywords, bids, match types, placements. That means most of the metrics your dashboard still shows are describing a game that no longer exists. Here is the four-layer measurement stack that actually tells you whether a 2026 paid account is working: profitability, incrementality, blended CAC, and first-party data quality.
The PPC Dashboard You Trust Was Designed for a Different Auction
The paid search dashboard most teams still look at every Monday morning was designed for an auction that no longer exists. It shows keyword-level CPCs, average position, reported ROAS, Quality Scores, and impression share — inputs the advertiser used to control and metrics that used to diagnose whether the account was working. In 2026 those metrics are describing a game where the advertiser no longer has the levers.
Performance Max distributes spend across Search, YouTube, Display, Gmail, and Discover with almost no advertiser transparency into which channel did the work. AI Max expands queries well beyond your keyword list into intent clusters the system decides are relevant. Smart Bidding sets bids in each individual auction based on signals the advertiser cannot see. And increasingly, ads are served inside AI conversations, where "impression" and "click" no longer mean what they used to.
So the question is not whether to keep looking at the old dashboard. It is what you should be looking at instead.
Why Traditional PPC Metrics Quietly Stopped Working
The core problem is that every traditional PPC metric was built on the assumption that the advertiser set the inputs and the platform reported the outcomes. That contract has inverted. The platform now sets the inputs and the advertiser is left trying to measure outcomes that the platform is also grading.
Keyword-level CPC is not a useful unit of work anymore
Google's own documentation on AI Max makes clear that it will match your ads to queries that are thematically related to your keywords, not literally matching them. The "search terms insights" report now groups queries into categories rather than showing individual strings, and broad match with Smart Bidding effectively ignores the traditional keyword structure. Looking at CPC by exact keyword in that world is like looking at the price per word in a book. It is a number, but it is not the right unit.
Reported ROAS is platform-graded homework
Every ad platform reports the conversions it wants to claim. Google, Meta, TikTok, and Amazon will each claim influence over the same purchase because each one saw a touchpoint within its attribution window. If you sum platform-reported revenue for a business running five paid channels, you will often find it exceeds actual revenue by 30 to 80%. Reported ROAS is the number each platform needs you to see in order to justify more spend. It is not neutral.
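To make the overclaim concrete, here is a toy calculation. The figures are invented, but the structure is what you would run against your own order database, the only neutral source of truth:

```python
# Toy illustration: summed platform-reported revenue vs actual revenue.
# All figures are invented for the example.
platform_reported = {
    "google": 420_000,
    "meta": 310_000,
    "tiktok": 95_000,
    "amazon": 180_000,
    "microsoft": 45_000,
}
actual_revenue = 640_000  # from the order database, not from any ad platform

claimed = sum(platform_reported.values())
overclaim = (claimed / actual_revenue - 1) * 100
print(f"Platforms claim £{claimed:,} against £{actual_revenue:,} actual "
      f"({overclaim:.0f}% overclaim)")  # 64% overclaim
```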
Quality Score and average position measure the old workflow
Quality Score was useful when you could diagnose ad relevance, expected CTR, and landing page experience at the keyword level and fix each one. With responsive search ads, dynamic search ads, and AI Max, the relationship between a specific keyword and a specific ad creative has become probabilistic rather than deterministic. Average position was deprecated in Google Ads back in 2019 for exactly this reason. The levers it described no longer exist in the same form.
Attribution windows are expanding whether you like it or not
Longer user journeys, AI-assisted research, and the collapse of third-party cookies have all widened the window between first touch and conversion. A conversion lag report in Google Ads will show you this directly: look at how many of your conversions arrive more than 7 days after the click, and compare it to the equivalent report from three years ago. For most SaaS and considered-purchase accounts, it has roughly doubled.
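To quantify it for your own account, export the lag report and sum the late buckets. A minimal sketch, assuming a days-to-conversion export shaped like the dictionary below (bucket labels and counts are illustrative):

```python
# Share of conversions arriving 7+ days after the click.
lag_buckets = {"<1": 310, "1-6": 240, "7-13": 150, "14-29": 120, "30-59": 90, "60+": 40}

late = sum(v for k, v in lag_buckets.items() if k not in ("<1", "1-6"))
share_late = late / sum(lag_buckets.values())
print(f"{share_late:.0%} of conversions arrive 7+ days after the click")  # 42%
```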
The Four-Layer Measurement Stack That Replaces It
The framework that actually works in 2026 stacks four layers on top of each other, each answering a question the one below it cannot. Read from the foundation up.
Layer 1: First-party conversion quality
This is the foundation and almost every other measurement problem traces back to it. Smart Bidding and Performance Max optimise against the conversion data you send them. If the data is incomplete or wrong, the AI will optimise toward the wrong outcome with remarkable consistency. The inputs that matter here are:
- Offline conversion imports — taking CRM-level pipeline and closed-won data and feeding it back to Google and Meta so the AI learns which leads actually convert to revenue, not just which leads submit forms (a minimal upload sketch follows this list)
- Revenue values mapped by SKU or plan — so a £12 trial signup and a £2,400 enterprise deal are not weighted the same in the bid model
- New vs returning customer flags — so the algorithm does not credit itself for bringing back existing customers as if they were net-new
- Lead-stage imports — marketing-qualified, sales-qualified, opportunity, closed — so the algorithm can learn which top-of-funnel signals correlate with downstream revenue
- Server-side tagging — so consent, ad blockers, and ITP do not silently erode your conversion data before it reaches the platform
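To make the first item concrete, here is a minimal sketch of an offline click-conversion upload built from CRM closed-won data. The CRM field names are assumptions, and the CSV headers follow Google's click-conversion import template; check the current template and accepted time formats in your own account before uploading:

```python
import csv
from datetime import datetime, timezone

# Assumed CRM export: closed-won deals with the original gclid stored
# at lead capture. Field names are illustrative.
closed_won = [
    {"gclid": "Cj0KCQiA...", "closed_at": datetime(2026, 1, 14, 9, 30, tzinfo=timezone.utc),
     "deal_value": 2400.00},
    {"gclid": "EAIaIQob...", "closed_at": datetime(2026, 1, 16, 15, 5, tzinfo=timezone.utc),
     "deal_value": 12.00},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value", "Conversion Currency"])
    for deal in closed_won:
        writer.writerow([
            deal["gclid"],
            "crm_closed_won",  # must match the conversion action name in Google Ads
            deal["closed_at"].strftime("%Y-%m-%d %H:%M:%S%z"),
            f"{deal['deal_value']:.2f}",
            "GBP",
        ])
```

The point is the value column: once £12 trials and £2,400 deals flow back with their real values, the bid model stops treating them as equals.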
This is unglamorous work. It is also where the majority of 2026 paid-account leverage lives. A well-built first-party stack will outperform a larger budget with a broken stack more often than not.
Layer 2: Blended CAC
cac" title="Blended CAC — see glossary" class="glossary-link">Blended CAC is the total cost of acquiring a customer across every channel you spend money on — paid search, paid social, content, SEO tooling, affiliate, influencer — divided by the total number of new customers in the period. The reason it matters more than channel-reported CAC is that AI Overviews and zero-click SERPs have shifted a meaningful slice of what used to be organic clicks into paid search. Looking at paid CAC in isolation now double-counts demand that would have converted anyway via organic.
Blended CAC also reveals something channel-level CAC cannot: whether the acquisition machine as a whole is getting more efficient or less efficient over time. A paid team can show improving channel-level CAC while blended CAC is rising because content production slowed or organic rankings fell. Or the opposite — blended CAC improving while paid CAC looks flat, because SEO started pulling more weight. Either picture changes the budget conversation.
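The calculation itself is deliberately simple. A sketch with illustrative figures:

```python
# Blended CAC: every acquisition cost in the period over every new
# customer, regardless of which channel claims them. Figures invented.
costs = {
    "paid_search": 38_000,
    "paid_social": 22_000,
    "content_team": 15_000,
    "seo_tooling": 2_500,
    "affiliate_and_influencer": 6_500,
}
new_customers = 410

blended_cac = sum(costs.values()) / new_customers
print(f"Blended CAC: £{blended_cac:,.0f}")  # £205 per new customer
```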
Layer 3: Incrementality
Incrementality is the answer to the single most important question a paid team can ask: would this conversion have happened without the ad? Because the platforms cannot answer that question honestly — they are the ones being tested — the measurement has to live outside the platform.
The three practical methods, in order of rigour:
- Geo holdout tests. Split your markets into matched treatment and control groups. Pause all paid activity in the controls for four weeks. Measure the gap. This is the gold standard for most B2C and B2B paid programmes and is achievable without platform cooperation. The arithmetic is sketched after this list.
- Brand search suppression tests. Pause your own brand bidding for a defined window and measure the decline in total brand conversions (paid plus organic). The typical finding is that 70-90% of paid brand conversions are recaptured organically. That does not mean brand bidding is always wrong — if a competitor is actively conquesting, the defence is worth paying for — but it reframes the conversation from "20x ROAS" to "how much of this is actually new".
- Platform-native conversion lift. Google now offers conversion lift tests with a minimum £5,000 spend. They are useful as a directional read, but the platform designing the test is also being tested by it, so treat the output as confirmatory rather than definitive.
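The matched-market arithmetic behind a geo holdout fits in a few lines. A sketch with invented figures; the counterfactual step is the part most teams skip:

```python
# Matched-market geo holdout (all figures invented).
# Paid stays live in treatment geos; paused in control geos for 4 weeks.
pre  = {"treatment": 1_000, "control": 980}   # conversions, matched pre-period
test = {"treatment": 1_050, "control": 830}   # conversions, test period
control_spend_saved = 48_000   # what the control geos would have spent
platform_reported = 600        # conversions the platform claims on equivalent spend

# What the controls would have converted with paid still on,
# scaled by the treatment group's trend over the same weeks.
expected_control = pre["control"] * (test["treatment"] / pre["treatment"])
incremental = expected_control - test["control"]  # ~199 conversions paid actually drove

print(f"Reported CAC:    £{control_spend_saved / platform_reported:.0f}")  # £80
print(f"Incremental CAC: £{control_spend_saved / incremental:.0f}")        # £241
```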
A real test I ran recently returned a reported CAC of £65 and an incremental CAC of £210. Neither number was wrong — the platform was correctly reporting the conversions it observed. But the difference between them was the difference between "scale this 3x immediately" and "this is a maintenance channel". Most paid teams have never run this test on their largest account. That is the single highest-leverage diagnostic available to anyone reading this.
Layer 4: Profitability
The top of the stack is the question everything else is in service of: are we making money? ROAS answers a narrower question — how much revenue per pound of spend — and it leaves out everything that eats the margin between revenue and profit. Contribution margin is the right metric, and it means calculating, per channel where possible (a worked example follows the list):
- Revenue attributed to the channel
- Minus product cost of goods sold
- Minus fulfilment, shipping, and returns
- Minus payment processing
- Minus the channel's own paid spend
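A worked example with illustrative figures, showing how a healthy-looking ROAS and a thin contribution margin can coexist in the same channel:

```python
# Per-channel contribution margin (figures invented for one channel).
revenue    = 120_000  # revenue attributed to the channel
cogs       = 54_000   # product cost of goods sold
fulfilment = 14_000   # fulfilment, shipping, and returns
processing = 3_000    # payment processing
paid_spend = 30_000   # the channel's own paid spend

contribution = revenue - cogs - fulfilment - processing - paid_spend
print(f"ROAS {revenue / paid_spend:.1f}x, contribution £{contribution:,}")
# ROAS 4.0x looks healthy; £19,000 of contribution is the real answer.
```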
For lead generation businesses the equivalent is qualified lead rate, sales acceptance rate, and close rate by campaign — because a campaign generating 300 form fills at £20 each is not the same as a campaign generating 100 form fills at £60 each if the second cohort converts to revenue 4x more often.
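The arithmetic, with illustrative close rates:

```python
# Cost per closed deal, not cost per form fill (rates invented).
campaigns = {
    "volume_campaign":  {"fills": 300, "cpl": 20, "close_rate": 0.02},
    "quality_campaign": {"fills": 100, "cpl": 60, "close_rate": 0.08},
}
for name, c in campaigns.items():
    print(f"{name}: {c['fills'] * c['close_rate']:.0f} deals "
          f"at £{c['cpl'] / c['close_rate']:,.0f} each")
# volume_campaign: 6 deals at £1,000 each
# quality_campaign: 8 deals at £750 each
```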
I have seen accounts with a reported 6x ROAS that were losing money because 72% of the orders were discount-code purchases on a 15% gross-margin product, once returns and shipping were layered in. The AI was doing exactly what it was told to do — maximising revenue per pound — and the business was quietly going broke.
The Hidden Channel Mix Inside Performance Max
Performance Max is the most opaque product in Google Ads, and it is also where an increasing share of spend is concentrated. The single campaign line in the dashboard is actually four different auctions stacked together, each with a very different incrementality profile.
The channel-level breakdown is available — you have to request it through the "Asset group report" and the "Insights" tab, and for shopping-heavy accounts through the Google Ads API — but it is almost never shown by default. If you are running Performance Max and you cannot show the channel-level mix, you are not actually managing the campaign. You are watching a single averaged number that hides four different efficiency profiles.
Practical steps:
- Pull channel-level spend and conversion breakdowns monthly (a rollup sketch follows this list)
- Add account-level negative brand keywords so PMax cannot cannibalise your brand search campaigns
- Feed strong audience signals based on your best-customer profile rather than leaving the algorithm to explore broadly
- Treat PMax as a conversion channel, not a prospecting channel, until incrementality testing proves otherwise
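Once you have a channel split, however you obtain it, the monthly rollup is trivial. A sketch assuming the columns below (the column names and figures are assumptions, not a documented export format):

```python
import pandas as pd

# Roll a PMax channel split up into CPA and spend share per surface.
df = pd.DataFrame({
    "channel":     ["Search", "YouTube", "Display", "Gmail/Discover"],
    "spend":       [9_200, 4_100, 2_200, 1_500],
    "conversions": [210, 35, 18, 12],
})
df["cpa"] = (df["spend"] / df["conversions"]).round(2)
df["spend_share"] = (df["spend"] / df["spend"].sum()).round(3)
print(df.to_string(index=False))  # four efficiency profiles, not one average
```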
Fixing Attribution Windows to Match Reality
Most accounts still run 30-day attribution windows because that was the default when the window was set up three years ago. Current user journeys often run 60 to 90 days for considered purchases and SaaS, and the right window depends on your product rather than the platform default.
Pull a conversion lag report in Google Ads (it sits under the tools menu, inside "Measurement") and look at the distribution of days-to-conversion on your top campaigns. If a meaningful share of conversions are arriving 14+ days after click, your attribution window is likely cutting off real credit. Extending it to 60 or 90 days will not inflate fake conversions — the platform only counts users it actually observed — but it will reallocate credit more accurately, which matters because credit is what the Smart Bidding model learns from.
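A quick way to see where the window should sit is the cumulative share of conversions captured per lag bucket. A sketch with illustrative counts:

```python
# Cumulative conversion capture by days-to-conversion bucket (counts invented).
buckets = [("0-6", 520), ("7-13", 180), ("14-29", 140), ("30-59", 95), ("60-90", 45)]

total = sum(n for _, n in buckets)
captured = 0
for label, n in buckets:
    captured += n
    print(f"days {label:>6}: {captured / total:.0%} captured")
# A 30-day window captures ~86% here; the missing ~14% of credit
# never reaches the bid model unless the window is extended.
```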
What to Report to Leadership (and What to Stop Reporting)
The board deck I send clients every quarter has three sections, in this order:
1. Business outcomes
- Revenue growth, quarter-on-quarter and year-on-year
- Contribution margin and gross margin trend
- Blended CAC and CAC payback period
- ltv-cac-ratio" title="LTV:CAC Ratio — see glossary" class="glossary-link">LTV:CAC ratio for the cohort we can measure
2. Ecosystem contribution
- Blended channel mix — how much of new-customer volume came from paid search, paid social, organic, direct, referral
- Incrementality test results for the quarter — what we proved, what we disproved
- Efficiency trend over time — not just this month versus last month, but a 12-month rolling view
3. Strategic learning
- The hypothesis being tested this quarter, in one sentence
- What the test is designed to prove or disprove
- How much it costs and what decision the result will unlock
What stops appearing in leadership reports: keyword-level CPCs, average position, Quality Score trends, manual bid adjustments, impression share by campaign. Those are tactical operating metrics the AI now largely owns, and reporting them upward signals that the team is still fighting the 2018 battle.
The Gaps That Are Still Unresolved
Nobody has fully solved two problems yet, and honest measurement work acknowledges them.
Direct offers and discounts. The relationship between discount frequency, discount depth, and true incrementality is almost never measured properly. Most businesses cannot tell you whether a 15% off campaign brought in new customers or just gave a discount to customers who would have bought anyway. A holdout matched by offer type, not just geography, is the only clean way to test this, and very few teams do it.
Agentic commerce. When users are interacting with a shopping agent inside ChatGPT or Claude or Perplexity, the concept of "impression" and "click" breaks down entirely. The agent reads product data, negotiates attributes, and recommends — sometimes with a sponsored signal, sometimes not. Current attribution models are not designed for this and the platforms have not landed on a standard yet. The right response in 2026 is to accept that this surface exists, invest in being machine-readable (structured data, clear product specs, verifiable reviews), and treat it as a channel whose measurement will evolve.
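Machine-readable in practice starts with structured data. A minimal sketch of schema.org Product markup, the kind of thing an agent can parse reliably (values invented):

```python
import json

# schema.org Product markup, emitted for a <script type="application/ld+json"> tag.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "sku": "RS-042",
    "description": "Lightweight road shoe, 240g, 8mm drop.",
    "aggregateRating": {"@type": "AggregateRating",
                        "ratingValue": "4.6", "reviewCount": "312"},
    "offers": {
        "@type": "Offer",
        "price": "89.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}
print(json.dumps(product_jsonld, indent=2))
```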
The Short Version for Anyone Skimming
The old PPC dashboard measured inputs you controlled. You do not control those inputs anymore. The new measurement stack has four layers — first-party conversion quality as the foundation, blended CAC as the channel-agnostic check, incrementality as the reality filter, and profitability as the business outcome. Everything else is noise the AI now handles on your behalf.
If you do one thing after reading this, it is the quarterly geo holdout. The gap between reported CAC and incremental CAC is the single biggest correction available to a paid team in 2026, and it costs nothing but four weeks of discipline.
If you want help designing this stack for a specific account — SaaS, fintech, ecommerce, or crypto — that is exactly what my consulting work focuses on. The measurement layer is the one that pays for itself inside a quarter in almost every account I touch.