
What Should You Avoid When Making an Ad Creative

What should you avoid when making an ad? Find out the 4 most common mistakes made when testing ad creative, so you can get more engagement quickly.
Pierce Porterfield

Testing ad creative is one of the most important determining factors of brand growth. Without it, you’re simply hoping for performance or optimizing blindly. 

And while it does require a bit of extra strategy and prep, those who are serious about increasing their ads’ effectiveness know it’s worth the time, effort, and money.

But only if you do it right. 

Without effective implementation of your tests, that time, effort, and money will be for naught. 

Here are four key mistakes to avoid when multivariate testing ad creative so that you get the most valid, impactful data possible.

Mistake #1: Starting without a hypothesis

Asking yourself, “What do I want to learn here?” is the very first step in kicking off a creative test. A well-thought-out hypothesis will help inform which assets — images, headlines, calls to action, etc. — should be tested in your ad creative.

With a hypothesis, for example, you might say, “I want to see if images of fruit or images of veggies perform better for my smoothie brand.” You’d choose a number of fruit images and veggie images, categorize them, run your test, and reveal not only which of the two categories performs better, but also which individual images in each category perform best.

Without a hypothesis, you lack a rationale for which assets to test, leaving you unable to categorize your images in a meaningful way. This causes your test to lack focus, and the data won’t be as powerful or meaningful as it could be.

Let’s go back to the smoothie brand example. Without a hypothesis, you may test a fruit image, a veggie image, four smoothie images, a person holding a smoothie, and an image of an orange grove. Data collected on such wildly different assets is shallower, making it difficult to know why the winning image won or what to test next. Sure, you’ll know which image was your top performer, but your learnings won’t be as deep as they could have been with a solid hypothesis in place.

The goal is to learn one or two things per creative test. Decide and document what you want to learn. Then choose and test the creative assets that will help you arrive at an answer. A strong hypothesis leads to a more controlled experiment, which leads to clearer results.
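To make the “decide and document” step concrete, here’s a minimal Python sketch of what a written test plan might look like: the hypothesis, the KPI, and each asset tagged by category so results can be rolled up at both the category and the individual-asset level. Field names, filenames, and numbers are illustrative, not from any specific tool.

```python
# A minimal sketch of documenting a creative-test hypothesis before launch.
# Field names and asset filenames are illustrative.

test_plan = {
    "hypothesis": "Fruit imagery outperforms veggie imagery for our smoothie brand",
    "kpi": "purchases",
    "asset_categories": {
        "fruit": ["strawberry.jpg", "mango.jpg", "banana.jpg"],
        "veggie": ["kale.jpg", "spinach.jpg", "carrot.jpg"],
    },
}

def summarize_by_category(results, plan):
    """Roll per-asset purchase counts up to the category level.

    `results` maps asset filename -> purchases observed in the test.
    """
    return {
        category: sum(results.get(asset, 0) for asset in assets)
        for category, assets in plan["asset_categories"].items()
    }

# Category totals answer the hypothesis; the per-asset numbers still
# tell you which individual image won.
results = {"strawberry.jpg": 42, "mango.jpg": 31, "kale.jpg": 18, "spinach.jpg": 25}
print(summarize_by_category(results, test_plan))  # {'fruit': 73, 'veggie': 43}
```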

Mistake #2: Overengineering your test based on assumptions

As creatives and marketers, we often think we inherently “know” what our audience wants to see in our ads. We arbitrarily choose which color combinations to use and which headlines must be paired with which images in our ads, all based on our own bias. 

Research shows we’re actually really bad at predicting winning ad creative. This is a particularly big flaw when it comes to multivariate testing, because it undermines the validity of your creative data.

This typically shows up in the form of conflating variables. For example, let’s say someone on the team decides that only images of plants should be used when mentioning the “all-natural” value prop.

If those two creative elements are always only paired together, how will you know whether it was the image or the value prop that prompted the conversion? Answer: you won’t. 

This is a tough concept to understand without a visual aid. So here’s a quick, two-minute explainer.

There’s a reason modular design is considered the go-to design approach for multivariate testing. It’s what allows each design element to be paired with every other design element, controlling all your variables for an effective test. It also makes it easy to reuse and retest elements over time and to see whether unexpected combinations of creative elements boost conversion.
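To see the combinatorics, here’s a rough Python sketch (the element names and values are illustrative) of what modular design implies in practice: every image can meet every headline and every call to action, so no two elements are permanently fused together.

```python
# A rough sketch of the modular idea: treat each creative element as an
# independent pool and generate every combination, rather than hard-wiring
# pairs like "plant image + all-natural headline". Values are illustrative.
from itertools import product

images = ["plant.jpg", "smoothie.jpg", "athlete.jpg"]
headlines = ["All-natural ingredients", "Ready in 60 seconds", "Fuel your morning"]
ctas = ["Shop now", "Try it free"]

variants = [
    {"image": img, "headline": hl, "cta": cta}
    for img, hl, cta in product(images, headlines, ctas)
]

print(len(variants))  # 3 x 3 x 2 = 18 variants; every element meets every other element
```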

Overengineering your test is also a huge waste of time. Those who attempt to make their test “perfect” before launch don’t get better results. It’s the marketers and creatives who admit they know nothing and decide to test everything who learn more, iterate faster, and get the greatest volume of intelligence.

The moral here? Before any multivariate test, it’s best to throw all assumptions out the window, treat each and every creative element as its own stand-alone variable, and let your customers tell you what they prefer through their clicks and purchases.

Mistake #3: Optimizing your test too soon

It’s best practice to let your full multivariate test run its course before making any optimizations.

Read that again.

Mid-test adjustments result in uneven data in terms of spend, reach, and impressions across assets. If your goal is data collection, adjusting during a test will skew your insights. 

Instead, give your test its fair shake and allow the best creative assets to rise to the top. Testing is phase one. Optimizing is phase two.

Mistake #4: Overvaluing statistical significance

Statistical significance — or stat sig — mathematically validates that an observed difference or correlation is unlikely to be due to random chance. It’s a signal to marketers that their ads are indeed working because enough people in their audience clicked or converted.

One of the greatest determining factors of stat sig is the sample size of a test. Because multivariate testing involves testing a large number of ads at once, reaching statistical significance for each ad in a test can be difficult — especially if your KPI revolves around purchases.
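For intuition, here’s a back-of-the-envelope Python sketch, using only the standard library and made-up numbers, of the two-proportion z-test that underlies most stat sig calls on conversion rates:

```python
# A back-of-the-envelope check of stat sig between two ads' purchase rates,
# using only the standard library. Numbers are made up for illustration.
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided test
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Ad A: 18 purchases from 2,400 impressions; Ad B: 9 from 2,300.
print(round(two_proportion_p_value(18, 2400, 9, 2300), 3))
```

In this made-up example, a near-2x difference in purchase rate still lands above the conventional 0.05 threshold, which is exactly why per-ad stat sig is hard to reach on purchase KPIs.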

The good news: not reaching stat sig does not render multivariate testing unreliable. Clear winners and losers emerge, even with smaller sample sizes per ad. 

It just means we often have to look at other indicators of success to help us make quick decisions about which ads and creative elements are or aren’t performing. 

One place to look is at your individual creative elements. Even if each ad had relatively low purchases, we can infer a strong signal if a specific element — headline, image, etc. — from this group of ads correlates with higher purchase rates. This is because each element is viewed more often per test than each individual ad. Quick example: if you test three images across nine ad variants, each image appears in three ads, so it gets roughly three times the exposure of any single ad.
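As a rough sketch of that idea (with illustrative numbers), pooling nine ad-level rows by image gives each image about three ads’ worth of impressions to be judged on:

```python
# A sketch of rolling ad-level results up to the element level. With three
# images spread across nine variants, each image aggregates the traffic of
# three ads, so element-level signals firm up faster. Data is illustrative.
from collections import defaultdict

# One row per ad variant: (image, headline, impressions, purchases)
ad_results = [
    ("fruit.jpg",  "All-natural", 800, 6), ("fruit.jpg",  "60 seconds", 820, 7), ("fruit.jpg",  "Fuel up", 790, 5),
    ("veggie.jpg", "All-natural", 810, 3), ("veggie.jpg", "60 seconds", 805, 2), ("veggie.jpg", "Fuel up", 815, 4),
    ("person.jpg", "All-natural", 795, 4), ("person.jpg", "60 seconds", 800, 5), ("person.jpg", "Fuel up", 810, 3),
]

totals = defaultdict(lambda: [0, 0])  # image -> [impressions, purchases]
for image, _headline, impressions, purchases in ad_results:
    totals[image][0] += impressions
    totals[image][1] += purchases

for image, (impressions, purchases) in totals.items():
    print(f"{image}: {purchases / impressions:.3%} purchase rate over {impressions} impressions")
```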

Alternate KPIs are another indicator. Reaching stat sig for clicks, post engagements, or add-to-carts, for example, can be a great proxy for success and reason to believe that an ad is worth scaling. Or, at the very least, worth running again in a smaller test to attempt stat sig for purchases.

Practice makes progress

Multivariate testing is complex. Mistakes will be made, even by those who make it a pillar of their brand’s growth strategy.

The key is to keep learning (the definitive guide to modern creative testing is a great place to start) and test everything. Make your pursuit of objective creative intelligence a relentless one and the results will show up on your bottom line.
