Measurable ≠ meaningful: what counts vs what’s counted
Marketing got addicted to the wrong kind of metrics. In the race to quantify impact, teams over-optimized for what could be measured, not what actually moved the needle.
The dichotomy of brand marketing vs performance marketing has been written about ad nauseam. This post seeks to build on that by asking a question: what would a paradigm shift look like if we rethought how we connect brand and experience activities to commercial impact? From an integrated brand and CX perspective, what most teams are doing is fraught with complexity and misalignment between (and even within) functions, caught between fluffy metrics, misleading KPIs and diminishing returns. Is there a better way?
The state of the industry
Nielsen research from last year highlights that 70% of marketing leaders are pulling budget from brand to performance. The 2025 follow-up shows that trend hasn't just continued but is likely tightening, with 54% of marketers now expecting to cut total ad spend altogether. Nielsen notes that marketers plan to lean more heavily on performance marketing and cheaper digital channels… putting brand investment at risk.

This runs counter to the guidance of 'godfather of effectiveness' Les Binet and his co-author Peter Field, whose work highlights the compounding nature of brand and performance as two distinct roles (with a rule-of-thumb mix of 62% brand and 38% performance).
That guidance gels with a recent study carried out by Tracksuit & TikTok, which found that brands investing in awareness see much higher conversion than those that don't (2.86x, to be precise), and with WARC's report that revenue ROI increased by a median of 90% when switching from a performance-only mix to a performance x brand mix.

A broader lens on measurement
Why the ongoing shift of investment into performance? Usually, it's about chasing the metrics that are cheapest to instrument (with demonstrable short-term uplift) while ignoring the ones that matter for future cash flow. But by expanding our data sets and choosing the right analytical lens, we can move from counting what is easy to counting what counts. This is not about rejecting measurement but rather expanding it. Instead of chasing perfect attribution, brands need to identify high-signal proxies that correlate with long-term impact.
Brand (and Experience to a certain extent) suffers from an intangibility complex. It’s long-term, non-linear, and let’s admit it… a bit fluffy. According to Gartner, 57% of brand leaders track brand health, but only 21% believe it leads to insights that are actionable by the organization. In a world where marketing budgets are shrinking (a 30% decrease over 5 years), marketing leaders are being forced to do more with less.

Couple that with "digital ROI" being one of the top three strategic hurdles faced by CMOs (driven by ROAS and last-click attribution), and the 'why' behind the shift to performance marketing begins to take shape.
Digital attribution became the metric of choice because it was precise. Yet over time, confidence has been shaken.
Famously, Adidas discovered how misplaced that reliance can be when its systems turned off performance media in LATAM and it found there was no impact.
WARC’s report found that when brand activity was turned off, performance marketing ROI suffered a 20-50% decrease.
Uber turned off its $100m performance budget and saw no difference in rider acquisition (claiming attribution fraud on a massive scale).
Is there a better way to connect what we do at a brand and experience level to commercial outcomes? Looking at what marketers are tracking, we can break the metrics down by level of causal confidence and the latency at which the data arrives.

To connect brand and experience activity to commercial impact there are broadly three methodological approaches marketers can use:
Attribution & direct connection
This includes deterministic tracking like conversion APIs, click paths, or user-ID joins. It’s the most direct way to link an action to a result, typically used in digital performance marketing. But it's often blind to upper-funnel and offline effects, and increasingly limited by privacy constraints and channel fragmentation. As WARC and Analytic Partners highlight, these models can overstate short-term impact by as much as 190% and miss the true drivers of growth. Attribution is no longer sufficient as a standalone decision tool. It needs to be supplemented with broader, more holistic approaches to avoid optimization traps.
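To make that optimization trap concrete, here's a minimal sketch (hypothetical touchpoint names and values, not any particular vendor's API) of how the same converting journey gets credited very differently depending on the attribution model applied:

```python
# A minimal sketch, using hypothetical touchpoint data, of how the choice of
# attribution model redistributes credit for the very same conversion path.

# One converting user's journey, ordered upper funnel -> lower funnel.
path = ["youtube_brand_video", "organic_search", "retargeting_ad", "branded_search_click"]
conversion_value = 100.0

# Last-click: all credit goes to the final touchpoint before conversion.
last_click = {path[-1]: conversion_value}

# Linear: credit split evenly across every touchpoint, including upper funnel.
linear = {touch: round(conversion_value / len(path), 2) for touch in path}

print("Last-click:", last_click)   # branded search captures 100% of the credit
print("Linear:    ", linear)       # brand touchpoints now share the credit
```

Under last-click, the upper-funnel brand touches look worthless, which is exactly the bias that inflates performance channels in the reporting.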

Incrementality testing
Methods like geo holdouts or on/off media experiments help isolate the additional impact of a campaign or channel versus a baseline. They provide strong causal confidence and are commonly used in media mix validation, but they can be expensive, time-bound and hard to scale.
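As a rough illustration of how such a test is read out, here's a minimal difference-in-differences sketch using hypothetical weekly sales for exposed and holdout regions:

```python
# A minimal sketch of a geo holdout read-out using difference-in-differences.
# All figures are hypothetical average weekly sales.

test_pre, test_during = 100_000, 112_000   # regions exposed to the campaign
ctrl_pre, ctrl_during = 98_000, 101_000    # matched holdout regions

test_delta = test_during - test_pre        # +12,000: campaign plus background drift
ctrl_delta = ctrl_during - ctrl_pre        # +3,000: background drift only
incremental_lift = test_delta - ctrl_delta # +9,000 per week attributable to the campaign

print(f"Incremental weekly lift: {incremental_lift:,}")
print(f"Relative lift vs pre-period: {incremental_lift / test_pre:.1%}")
```

In practice you'd want well-matched regions, a long enough pre-period and a significance test, which is part of why these experiments are costly to run at scale.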
Correlation & proxy modeling
This is about identifying patterns over time or across markets using techniques like time-lagged correlation, regression modeling or composite indices. While not causal on their own, these methods are fast, flexible, and powerful when used with discipline. They’re especially useful in brand and experience work where clean test environments are hard to come by.
The last of these is what I want to explore. It lets us widen the aperture of what we track and demonstrate impact with a degree of confidence and reliability.
Proxies
The right proxies can expose the relationships between brand or experience activities and business results. This can be done in multiple ways:
Time-lagged correlation
e.g. editorial coverage or search interest spikes (3–6 month lag → sales lift)
Geo-matched markets
Compare investment variance in brand across similar regions
Regression models
Introduce brand inputs as variables in MMM to capture interaction effects
This approach doesn't pretend to deliver causal insights. But it can expose patterns that matter and that, importantly, give leadership teams confidence when making investment decisions. There is no one-size-fits-all solution for brands; the aim is to discover the highly correlated variables that are relevant to your business.
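For instance, a minimal sketch of the time-lagged correlation approach might look like this (the monthly series are placeholders; you would substitute your own brand proxy and sales data):

```python
# A minimal sketch of time-lagged correlation: scan lags between a monthly
# brand proxy (e.g. search interest) and sales to see where the relationship peaks.
# The series below are illustrative placeholders, not real data.
import pandas as pd

df = pd.DataFrame({
    "search_interest": [52, 55, 61, 58, 66, 70, 74, 73, 80, 85, 83, 90],
    "sales":           [200, 204, 203, 215, 212, 228, 224, 240, 246, 243, 259, 266],
})

# Correlate sales with the proxy shifted forward by 0-6 months.
for lag in range(0, 7):
    corr = df["sales"].corr(df["search_interest"].shift(lag))
    print(f"lag {lag} months: r = {corr:.2f}")

# The lag with the strongest correlation is a candidate lead time for the proxy,
# not proof of causation; it still needs validation (e.g. holdout periods).
```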
To do that well, you’ll want to be thoughtful about which signals to include. The metrics worth pulling into correlation analysis tend to share three key qualities:
Granularity: so you can observe meaningful movement over time
Scale: to ensure the patterns you find are statistically robust; and
Signal quality: meaning they capture not just behavior but emotional or qualitative cues like excitement, resonance, or frustration
In the wild
Some examples that show this in action:
Reviews:
A nationwide Google Maps study found that nudging a restaurant's rating from the low-3s into the low-4s increases its predicted five-year survival odds by roughly 12 percentage points, making star rating the most influential variable in the model.
Another traffic impact study found that every additional half-star on Google boosts full-service restaurant traffic by roughly one-quarter versus local benchmarks.
Harvard Business School research on Seattle restaurants showed that a one-star increase in average Yelp rating lifted revenue by 5–9% for independents, with the effect realized over the subsequent two to three quarters.
CX:
A study on sportswear companies concluded that quarter-to-quarter improvements in brand-health NPS measured across all potential customers (not just current buyers) are a reliable leading indicator of next-quarter sales growth in the U.S. sportswear category, while static NPS levels or customer-only NPS do not predict revenue.
But NPS can be a blunt instrument and should be used with caution. CSAT offers a more nuanced read and can be a strong predictor for brands. A more qualitative approach, such as The Spikes Excitement Points, can provide even greater insight and focus.
On the B2B front, Gainsight found a strong positive relationship between Net Revenue Retention (NRR) and Enterprise-Value/Revenue multiples across SaaS firms (roughly 0.7× of multiple per 1 ppt of NRR), meaning that even a 1 ppt improvement in existing-customer growth correlates with ~$700M more enterprise value on a $1B revenue base. Though focused on SaaS, this example shows how strongly retention metrics like NRR can signal enterprise value. Consumer brands can look to mirror such an approach with loyalty or CLV-based proxies.
Intent:
Google search data has a very high correlation with Netflix's US subscriber growth.

Share of organic search is a strong leading indicator of share of market across automotive, energy and mobile phone handsets. This can be influenced by advertising investment and pricing.
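A minimal sketch of that share-of-search idea, using hypothetical quarterly figures, might look like this:

```python
# A minimal sketch of share of search as a leading indicator: convert raw search
# volumes into a brand's share of category search, then compare it with share of
# market one quarter later. All figures below are hypothetical.
import pandas as pd

quarters = pd.period_range("2023Q1", periods=8, freq="Q")
df = pd.DataFrame({
    "brand_searches":    [120, 135, 150, 160, 155, 170, 185, 200],
    "category_searches": [900, 940, 980, 1000, 990, 1020, 1060, 1100],
    "share_of_market":   [0.118, 0.121, 0.128, 0.134, 0.139, 0.137, 0.145, 0.152],
}, index=quarters)

df["share_of_search"] = df["brand_searches"] / df["category_searches"]

# Does this quarter's share of search predict next quarter's share of market?
lead_corr = df["share_of_market"].corr(df["share_of_search"].shift(1))
print(f"1-quarter lead correlation: r = {lead_corr:.2f}")
```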

Closing thoughts
The shift toward easily measured metrics created an illusion of certainty. Over time, this overconfidence led to decisions anchored in what was visible, not always what was valuable, favoring short-term attribution over long-term effect. But not everything that counts can be cleanly counted. By identifying strong correlations between brand or experience signals and commercial outcomes, it’s possible to make more confident decisions, even without perfect causality.
What is meaningful for one business will be different for another, so there's no one-size-fits-all approach. Importantly, the use of proxies gives us guidance instead of prescribing solutions, ensuring that we augment our judgement, not replace it.