EFFECT versus EFFECTIVENESS
Effect doesn’t care how elegant the model is. It cares whether something actually moved.
There’s a heresy spreading through advertising’s chattering (and grumbling) classes.
It’s not big enough to call a full-scale revolt just yet, but it’s deffo noticeable if you’re paying attention: a mini backlash against the term ‘effectiveness’.
For years now, effectiveness has been the industry’s badge of seriousness. We built the frameworks, the econometric models, the attribution systems, and a few Binet/Field charts that seemed to be compulsory in every strategy deck. All with the very reasonable goal of proving that what we do works. What’s not to like? A lot, it appears.
The spirit of that work has been totally valuable. It forced a bit of discipline and created a shared language (60-40 FTW!). It made marketing a bit more palatable to the CFO, I suppose.
But somewhere along the way, it jumped the shark a bit. ‘Effectiveness’ stopped being about UNDERSTANDING what works and became about PROVING that something worked.
And those are not really the same thing, because once you move into the business of proof there’s a real danger that you start optimising for what CAN be measured. You lean too hard on proxies (hello there, attention metrics), you discount uncertainty and then retrofit narratives onto results. You create outputs that look precise enough to reassure bean counters and give hard-of-thinking planners PowerPoint material, and it starts to feel like science. But it’s much closer to theatre than science.
Not deliberately deceptive, but constructed and selective. Dependent on assumptions that rarely survive outside the model that produced them.

Now, I’m all for anything that says ‘please can we do more creative advertising’. If anything nudges a few more brands away from wallpaper and towards actual ideas, I’m not mad at it.
But ever since The Long and The Short of It, I’ve had a nagging doubt that Effies are, in many ways, selection bias by design. C’mon, we’ve all known for well over a decade, ever since Binet & Field started leaning on IPA/Effies data, that using award case studies comes with HUGE selection bias baked in. Surely that’s not new news.
They are self-selected, curated, and judged: a highlight reel of campaigns that ‘worked’ and were then written up beautifully. What they show you is not what advertising does in general, but what success looks like after it has been explained nicely.

The problem is obvious: you’re only studying the winners. There’s no graveyard of campaigns that followed the same principles and failed, no visibility of how often those same approaches don’t work. Which means patterns start to look more deterministic than they really are, and messy reality gets tarted up into a clean story of insight, idea, execution, result. Effies don’t prove what works. They show what some planners think ‘might have worked’ once and then got post-rationalised into a compelling narrative. They are examples of ‘success’, not evidence of causation. In that sense, they are a perfect symbol of the industry’s obsession with effectiveness.
How do I know this? Well, I’ve written enough of them.
Optimised for demonstration, not for repeatable effect.
Which is where the distinction between effectiveness and effect starts to matter.
Effectiveness, as it’s commonly used, is retrospective. It looks back and asks, ‘can we demonstrate that this worked?’
Effect is forward-facing. It asks, ‘will this change anything in the real world?’
Effect doesn’t care how elegant the model is.
It cares whether something actually moves.
And it’s OK accepting something that the effectiveness lobby often seems to sweep under the carpet: that advertising operates in probabilities, not certainties.
You are not pulling levers in a controlled system; you are sending signals out into a messy, distracted, socially influenced environment and hoping they stick. It’s not engineering.
Sometimes they do. Sometimes they don’t. Often, they do in ways you can’t fully trace.
Which means the job is not to eliminate uncertainty.
It’s to work with it and, most importantly, to reconnect everything back to human behaviour.
If something ‘worked’, there should be a plausible explanation rooted in how people actually process the world. Did it stand out? Did it trigger emotion? Did it connect to a memory structure that could be retrieved later?
Without that, attempts at quantifying effectiveness risk becoming (even more) circular.
It worked because the numbers say it worked, and the numbers say it worked because they were designed to.
I hope the nascent backlash against effectiveness studies isn’t a rejection of evidence. What it should be is a rejection of overconfidence. A pushback against the idea that we can fully quantify a system that is, by its nature, only partially observable.
And that’s a healthy correction, IMO, because the alternative isn’t even more precise guesswork. It’s judgement, informed by…
Empirical patterns (what tends to work)
Behavioural science (how people actually behave)
And real-world observation (what is actually happening in market)
Is it time to retire ‘effectiveness’ as the ultimate goal? Could be.
Not because it’s wrong, but because it’s been stretched beyond what it can reasonably support. We could replace it with something simpler. More honest and harder to fake.
Effect.
Will it get noticed?
Will it get remembered?
Will it do something to move behaviour out in the world?
If the answer is yes (probably), then something is working, whether or not some spurious decimal-point-moving trickery can prove it neatly.
Because in the end, markets don’t respond to models, they respond to signals.
And signals either have an effect. Or they don’t.
