A not-so-surprising fact: people who have a hard time remembering the past have a hard time predicting the future. Very few marketers — including CMOs, creative directors, and agency executives — remember what they learned from creative tests they ran six months ago. If they can’t remember what they learned, how can they use test results to predict creative performance?
Marketers don’t remember what they learned from creative tests for many reasons. First, test results are usually locked in incomplete feedback loops and organizational silos. Other than a few creative testing champions, stakeholders often don’t appreciate the goals of creative tests.
Are tests being run to generate incremental improvements, like reducing CPA by testing the presence vs. absence of models? Or are they being run to catalyze paradigm shifts, like transforming a brand's visual identity? This lack of shared vision limits cross-functional buy-in.
Relatedly, when marketers assume that the results of a creative test only apply to the ads being tested, they perpetuate outdated logic. They also discourage stakeholders from investing in more extensive testing.
Cross-functional buy-in is also limited when testing goals are misaligned with — and sometimes threatening to — other teams’ KPIs. Performance marketers, for instance, focus on CPA at the expense of brand salience, memorability, and reach. Territorial instincts between brand and performance teams, in this way, can constrain creative testing’s cross-functional value.
As a result, the outcomes of tests are rarely celebrated across teams, let alone used to lift the performance of channels other than the channels being tested.
Results may be used to learn which of three Instagram ads wins or loses, for example. But they’re rarely used to learn why ads win or lose. Was it the product image? The copy? The background color?
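To make the why-level analysis concrete, here’s a minimal sketch — using entirely hypothetical test results, attribute names, and CTR figures — of how results tagged by creative attribute can be averaged per attribute value, rather than per ad, to hint at why an ad won:

```python
from statistics import mean

# Hypothetical results from three Instagram ad variants,
# tagged by creative attribute (names and numbers are illustrative only).
results = [
    {"product_image": "lifestyle", "background": "white", "ctr": 0.021},
    {"product_image": "studio",    "background": "white", "ctr": 0.014},
    {"product_image": "lifestyle", "background": "blue",  "ctr": 0.019},
]

def avg_by(attribute, rows):
    """Average CTR for each value of a creative attribute,
    shifting the question from *which ad* won to *why* it won."""
    values = {row[attribute] for row in rows}
    return {v: mean(r["ctr"] for r in rows if r[attribute] == v) for v in values}

print(avg_by("product_image", results))
print(avg_by("background", results))
```

With data like this, the lift attaches to the attribute (for example, lifestyle imagery outperforming studio shots) rather than to a single ad, which is what makes the learning reusable.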
Results also aren’t applied across the full funnel. Brands are missing an opportunity to use learnings from creative tests to lift the performance of product launches, website redesigns, and offline media like audio and TV.
Without a continuously growing database of creative intelligence, marketers have to predict creative performance based on what they think will win.
Because we’re human, our predictions are only as good as the memories on which they’re based. When we don't remember what we learned from previous creative tests, our predictions are biased towards our preferences and towards our most available memories. Or towards the opinions of the highest paid people in the room.
We’re misled by groupthink and by recent, unrepresentative results. So, we predict incorrectly. In fact, marketers — ranging from newly minted performance marketers to tenured CMOs — accurately predict winning creative only 52% of the time.
Thankfully, solutions exist to help us predict and de-risk creative performance. These tools also streamline and automate the workflows entailed in designing, testing, and launching new ads. And they generate deep, structured creative intelligence, helping marketers design new, improved creative quickly and confidently.
Multivariate testing (MVT) is the fastest, most robust way to generate a rich database of creative intelligence. MVT empowers brands and agencies to serve audiences every possible combination of creative. This lets them prove — or disprove — hypotheses and quickly learn which creative variables resonate and which fall flat.
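To illustrate what “every possible combination” means in practice, here’s a minimal sketch — with hypothetical creative variables and variant names — of how a full-factorial MVT matrix can be generated:

```python
from itertools import product

# Hypothetical creative variables and their variants for an MVT test
# (the names here are illustrative, not from any specific platform).
creative_variables = {
    "product_image": ["lifestyle shot", "studio shot"],
    "headline": ["benefit-led", "price-led", "social proof"],
    "background_color": ["white", "brand blue"],
}

def build_variants(variables):
    """Return every possible combination of creative variables:
    a full-factorial multivariate test matrix."""
    names = list(variables)
    return [dict(zip(names, combo)) for combo in product(*variables.values())]

variants = build_variants(creative_variables)
print(len(variants))  # 2 * 3 * 2 = 12 ad variants to serve
```

Because every variant pairs each variable value with every other value, performance differences can be attributed to individual variables — the source of the variable-level insights described above.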
MVT also helps marketers challenge winning variables to deter creative fatigue, deepening and broadening insights while cutting CPA by as much as 60% in some cases. MVT’s ability to reduce CPA is especially valuable now that iOS 14’s privacy changes have led to broader audiences and weaker intent, making more compelling creative necessary to lift performance.
It’s time for a better crystal ball. If you’re interested in generating creative intelligence to predict and de-risk creative performance, reach out to our team to learn more.