Knowing what we are doing: the case for growth-focused creative pre-testing
“If we knew what we were doing it wouldn’t be called research.” – attributed to Albert Einstein
Creative pre-testing uses a fraction of a campaign’s budget to test whether an ad will positively affect sales before it is put out into the world. For a marketer, it offers some reassurance that you are investing in marketing, not just spending on it. Sounds simple and sensible, doesn’t it?
Pre-testing rose to prominence in the 1960s alongside TV advertising, when Stephen King of JWT’s ADMAP article(1) advanced thinking in pre-testing that endures to this day.
Since then the practice has been continuously revised, and all manner of methodologies for testing ads, including brain scanning and facial coding, mean its potential could be greater than ever.
But despite all this innovation, it is still divisive. After all, shouldn’t we, as experienced marketers, be able to just look at an ad and ‘gut-feel’ it?
Prof. Kennedy of the Ehrenberg-Bass Institute for Marketing Science decided to test that assertion(2). Our intuition, it turns out, is not particularly reliable.
“We may as well have tossed a coin – relying on intuition is just gambling your company’s money.”
Professor Rachel Kennedy, Director, Product Development at the Ehrenberg-Bass Institute for Marketing Science
Over 700 marketers assessed two ads: one a proven winner, the other a proven loser. The research showed that marketers could not, in fact, “gut feel” it at all. Proof, if it were ever needed, that accountability needs rigour, not just experience.
In 2007, Les Binet and Peter Field helped change the view of marketers who believed marketing was a purely instinctive endeavour with their hugely influential paper “Marketing in the Era of Accountability”(3). It empirically identified the marketing practices that increase profitability, encouraging the marketing world to add more science to the ‘art’.
What, then, was their advice on pre-testing? Surely, if going on gut feel is the opposite of accountability, their paper should have recommended pre-testing strongly.
“Cases that reported favourable pre-testing results actually did significantly worse than those that did not.”
Binet and Field – Marketing in the Era of Accountability
Surprisingly, it didn’t. On every measure, campaigns whose advertising had been positively pre-tested showed less pronounced business effects; pre-testing seemed to somehow hamper the effectiveness of campaigns.
This was incendiary, and Binet and Field didn’t stop there. In their 2013 paper “The Long and Short of It”(4), new data reinforced the finding that pre-tested campaigns were less likely to achieve long-term business success. The evidence was stacking up.
So why wasn’t pre-testing working?
METHOD IN THE MADNESS
To be fair to Binet and Field, they always alluded to the fact that research wasn’t worthless in itself, but that the methodologies were flawed: particularly in how objectives were set, and with reference to the low-attention processing model of Robert Heath’s work(5) on how advertising works over the long term.
“If a pre-testing methodology is intended to predict a single response measure such as standout, and that measure is a poor predictor of business success, then the pre-testing methodology is likely to be flawed.”
Binet and Field – Marketing in the Era of Accountability
In “The Long and Short of It” they offered deeper insight: for short-term campaigns of less than six months, pre-testing did in fact have a positive effect (+10%), but this became a negative effect (-10%) for campaigns lasting two years or more.
Pre-testing seemed to work really well for short-term campaigns, and poorly for campaigns focused on long-term effectiveness.
Alarmed by these findings, leading market research companies set out to fix things. As it turned out, persuasion-led creative pre-testing actively discriminated against ads with greater long-term effectiveness potential(7).
”These industry-standard measures (persuasion, brand linkage, and cut through) actually mislead when it comes to predicting the effectiveness of ads, discriminating against advertisements that generated greater numbers of (long-term) business effects in market.”
Orlando Wood – System1
Some ads simply didn’t stand a chance in traditional pre-testing, because it tested whether they could generate quick sales, not long-term effectiveness(7). We needed a new methodology for long-term growth.
Since then, most research agencies have revolutionised their approach to pre-testing, assessing short-term persuasiveness alongside long-term recall and emotion to make testing more predictive.
The IPSOS Equity Effect Index(6), Kantar Millward Brown’s investment in neuroscience(8) and System1’s much-touted star rating(9) for emotional response have all dramatically improved the predictive power of pre-testing in recent years.
But if we have learned anything from the chequered history of creative pre-testing innovation, it is that new methods alone don’t solve the problem. The most important element is matching the right kind of testing to the objectives of the brief.
“Pre-testing works best when it's used to optimise an ad to maximise RoI, when it's used collaboratively by the client, the agency, and the research partner and when the research agency is fully briefed on the goals of the ad and can tailor the study outputs and debrief.”
Darren Poole – Millward Brown
To that end, we need solid, evidence-based briefs with clear business objectives, so we know what an ad is supposed to achieve; and those objectives need to be shared collaboratively between client, agency and research partner, so the study outputs and debrief can be tailored accordingly.
As the old saying goes – rubbish in, rubbish out.
TESTING, TESTING, 1,2,3
Some advice for creative pre-testing, then:
- Avoid just “gut feeling” an ad; it’s gambling with your business’s money. Testing gives us accountability for creative decisions.
- Creative pre-testing does support growth, but it needs the right methodology, aligned to the objective of the campaign: persuasion for short-term results; emotion and recall for effective long-term growth.
- Tight, business objective-led briefs and strong collaboration ensure every partner is working to the same goal, and that the right tests, outputs and debriefs are developed.
Einstein knew that even he couldn’t predict outcomes. If you are serious about creativity for growth, it’s time for science to come to the art.
Ringo Moss is Managing Partner – Strategy, at McCann Demand.
1. Stephen King: “Can research evaluate the creative content of advertising?” – ADMAP, 1967
2. Rachel Kennedy: “Ehrenberg-Bass on the science of pre-testing campaigns” – WARC, 2018
3. Les Binet and Peter Field: “Marketing in the Era of Accountability” – WARC, 2007
4. Les Binet and Peter Field: “The Long and Short of It” – IPA, 2013
5. Robert Heath: “50 years using the wrong model of TV advertising” – MRS, 2007
6. Adam Sheridan: “Selling Creative Research Short?” – IPSOS, 2019
7. Orlando Wood: “How Emotional Tugs Trump Rational Pushes” – System1, 2012
8. Dominic Twose: “How to measure emotional response in advertising” – Admap, 2017
9. Stephen Whiteside: “System1’s five tips to deliver more effective television ads” – WARC, 2020
10. Darren Poole: “How to use copy-testing or Pre-Testing” – Millward Brown / WARC, 2016