
Break Your Bad Ad Testing Habits

Learn how to recognize and avoid bad ad testing habits that could be weakening the efficacy of your multivariate results. Get insights from Marpipe today!
Jess Cook

“Mistakes were made” is not something you want to hear when it comes to testing paid social.

But, as with anything new, mistakes are bound to happen when you move away from A/B testing and adjust to better, more granular methods like multivariate.

But fear not, gentle ad nerds. On the latest episode of Resting Ad Face, Susan and I talk through the four most common faux pas we see new multivariate testers make, and share advice on how to avoid them for more meaningful creative data.

Here are some highlights:

Old A/B testing habits die hard

Marketers are familiar with A/B tests. They’re simple and they’ve been around, in various forms, for ages. So it’s very easy to carry over the practices made popular by that method of testing to multivariate testing, but — sad trombone — they can negatively affect the power and validity of your data.

Here are the two most common habits carried over from A/B testing:

1. Starting your test without a hypothesis. With A/B testing, you don’t really need a hypothesis. You pit a handful of fully realized concepts against each other, and an ad either wins or it doesn’t.

But the foundation of multivariate testing is the individual assets within your ads. To know which assets to include in your test, you have to know what you want to learn. 

Let’s say, for example, you want to learn if images of women or men perform better. You now know you’ll need images of men and women to see that hypothesis through.

Here are a few more examples of solid hypotheses for multivariate testing:

  • I want to see which generates more conversions: images of my product being used by a model or images of the product by itself.
  • I want to see which discount drives CTR the most: BOGO, dollars off, or percentage off.
  • I want to see which background colors generate more leads.

Upon reading these, it’s clear which creative elements would be required to run each test — and learn what your customers gravitate toward in your ad creative.
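If it helps to see that idea laid out, here’s a minimal sketch (plain Python with hypothetical asset names; this isn’t a Marpipe feature) of how each hypothesis pins down the variable you’re testing, the assets you need to gather, and the metric that decides it:

```python
# A minimal sketch: each hypothesis spells out the variable under test,
# the assets required, and the success metric. All names are hypothetical.

hypotheses = {
    "Do images of women or men perform better?": {
        "variable": "image_subject",
        "levels": ["woman_1.jpg", "woman_2.jpg", "man_1.jpg", "man_2.jpg"],
        "metric": "conversions",
    },
    "Which discount drives CTR the most?": {
        "variable": "offer_copy",
        "levels": ["BOGO", "$20 off", "20% off"],
        "metric": "CTR",
    },
}

for question, spec in hypotheses.items():
    print(question)
    print(f"  variable under test: {spec['variable']}")
    print(f"  assets needed: {', '.join(spec['levels'])}")
    print(f"  success metric: {spec['metric']}")
```

If you can’t fill in all three fields, the hypothesis probably isn’t ready to test yet.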

2. Overengineering your test based on assumptions. As creatives and marketers, we often think we inherently “know” what our audience wants to see in our ads. We arbitrarily choose which color combinations to use and which headlines must be paired with which images in our ads, all based on our own bias. (Research shows we’re actually really bad at predicting winning ad creative, by the way.) 

In multivariate testing, this typically shows up in the form of conflating variables. For example, let’s say someone on the team decides that only images of plants should be used when mentioning the “all-natural” value prop. If those two creative elements only ever appear together, how will you know whether it was the image or the value prop that prompted the conversion? Answer: you won’t.

Designing modularly is the key to breaking this habit. By separating every creative element within your ad, you can understand which elements — headlines, colors, images, etc. — are the reason people click or purchase.
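As a rough illustration of modular design (the asset names and copy lines below are hypothetical), the sketch crosses every image with every headline and background color. Because the plant image also runs against the discount copy and the yellow background, the results can tell you which element actually earned the click:

```python
# A minimal sketch of modular multivariate design: every element is crossed
# with every other element, so no two variables are conflated.
# All asset names and copy lines are hypothetical.
from itertools import product

images = ["plant.jpg", "product_only.jpg"]
headlines = ["All-natural ingredients", "20% off your first order"]
backgrounds = ["green", "yellow"]

variants = list(product(images, headlines, backgrounds))
print(f"{len(variants)} ad variants")  # 2 x 2 x 2 = 8
for image, headline, background in variants:
    print(f"image={image:18} headline={headline:26} background={background}")
```

Contrast that with the conflated version, where plant.jpg only ever appears with the “all-natural” headline: you’d ship fewer variants, but you’d never learn which of the two elements did the work.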

Continuous testing leads to stacked, incremental gains

Creative testing has traditionally been treated as a nice-to-have, deployed sporadically. We have a sale or a new product launch coming up, so we test the ad creative once and move on with our lives.

But one of the most important upsides to multivariate testing is the ability to stack performance improvements over time through consistent testing. With each test, you can improve your ad’s ability to convert, little by little (or sometimes, a lot by a lot).

If your test results show clear positive outliers, you can keep probing subsets of those elements to find the most performant variant. If your test produces no clear winners or losers, you can try something totally different and see whether the new direction surfaces any winners to probe further.
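To put rough numbers on the idea of stacked gains (the figures here are hypothetical, not benchmarks), even a modest lift per winning test compounds quickly when you keep testing:

```python
# A rough illustration with hypothetical numbers: a 5% relative lift per
# winning test, compounded over six tests, works out to roughly a 34% gain.
baseline_cvr = 0.020    # 2.0% starting conversion rate
lift_per_test = 0.05    # each winning test lifts CVR by 5% (relative)
tests_run = 6

cvr = baseline_cvr
for test in range(1, tests_run + 1):
    cvr *= 1 + lift_per_test
    print(f"after test {test}: CVR = {cvr:.4%}")

print(f"total lift vs. baseline: {cvr / baseline_cvr - 1:.0%}")
```

The exact numbers vary by account, but the shape of the curve is the point: consistent testing compounds, while one-off testing doesn’t.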

For more, subscribe to Resting Ad Face on YouTube or your favorite podcast platform.

Transcript

[00:00:05] Susan: Hey, everybody. Welcome back to another episode of Resting Ad Face. I am Susan Wenograd, the VP of Performance Marketing here at Marpipe and I am joined by my amazing content coworker, Jess Cook.

[00:00:18] Jess: Hey everyone.

[00:00:19] Susan: This week, we are going to be talking about mistakes.

[00:00:23] Jess: Ohhh.

[00:00:23] Susan: It's a juicy topic. But there's, I mean, there's plenty of mistakes you can make when you run media, so we're not gonna go through all those. But I think there's so many questions around multivariate testing and because marketers are so conditioned to doing things like an AB test, it's really easy to carry over some of those habits and actually ruin your, your multivariate test or have it just not produce anything that's learnable from it.

[00:00:54] So there's, we identified really four things, I think, that we see most often [00:01:00] become a problem. And these are certainly things that I was guilty of when I started or had to unlearn certain habits. So let's take today to dive into those and Jess, I'm gonna have you start.

[00:01:13] Jess: Wonderful. I'm so excited. So our first mistake that we see, you know, just customers or people new to multivariate testing make is starting without a hypothesis.

[00:01:26] I think people get really eager to like jump into that test and like, see what

[00:01:30] Susan: I wanna test everything!

[00:01:31] Jess: Yeah, exactly, like, less is more. And when you start without a hypothesis, it really means like you have no clear picture of what you want to learn. Right? And without that, you don't actually know strategically what to test.

[00:01:47] So if you have a hypothesis, let's say I wanna find out if in my ad creative, images of women or men perform better. That right there is gonna tell you exactly what you need to test. You're gonna need a [00:02:00] couple images of women, a couple of images of men, and you're gonna have to kind of pit those against each other.

[00:02:05] And at the end you will have learned something, right? One of them will perform better. They'll perform kind of the same, or maybe you'll start to see like, oh, the dark-haired man and woman performed really well. So let's, let's go, let's dive a little deeper into that. Right? But in order to get to that point of knowing what assets you're gonna test and two, what you actually wanna learn, and then three, what to test next,

[00:02:32] you really have to start with that solid hypothesis of what you want to learn this time around.

[00:02:37] Susan: And I feel like that's left over from the AB testing mindset.

[00:02:40] Jess: For sure.

[00:02:41] Susan: Right? It's like normally you get two totally different things and it's like, let's see which ad is the winner. It's not approached from an asset perspective where it's like, you can see that granular.

[00:02:51] It's kind of like, which one of these does better let's scale. The one that does better. So it's one of those things that that's just how you've done it before. Like, you might [00:03:00] have a guess like, oh, I think this one's gonna do better but that's not really a hypothesis that you're testing.

[00:03:03] Yeah. It's just kind of like a coin flip as to what audience is gonna, like, what. So it's like when we talk about the mistakes that we, as media buyers learn through years of conditioning, I think that that's definitely one of them and it's, and it, and it is hard. Like you said, once someone realizes, Hey, I can get all of this data.

[00:03:21] I wanna get data on everything. And it's like, but then you're not really gonna learn a lot. You know, you'll, you'll learn a lot less if you try and get data about everything, versus if you go in pretty laser-focused, knowing what you wanna learn. And that's one of the things that we talk with clients about a lot is it's like, they get so excited about that first test,

[00:03:38] they forget they'll have a million tests after that, right? Yeah. It's like, it doesn't all have to be done in this one test. So taking a look at what it is you wanna learn. And then if it's, if it's too much being like, okay, this should be a subsequent test. This should be the test after that, like start breaking it down

[00:03:51] into sequential tests you know, based on what you wanna learn each time.

[00:03:56] Jess: Absolutely. And kind of knowing that, like going in, you're gonna learn one or two [00:04:00] things. Yeah. And, and be okay with that. Right? Yeah. Like that is okay. That's, that's enough to know what to test next. So, and that's all you need.

[00:04:08] So I think, you know, tho that's a pretty that's a pretty easy mistake to fix and to kind of like realize and, and, and retrain yourself to do differently.

[00:04:20] So the second mistake that we have is overengineering your test based on assumptions. And again, I think this is another kind of like leftover practice from AB testing.

[00:04:32] Usually this comes in the form of like conflating variables. So you know, because we're showing an image of a plant in this ad, we should really only show it with the green background color. And that does not have to be the case. You might find that the image of that plant actually performs best with a yellow background.

[00:04:53] But you wouldn't know that, right. Or we've seen customers before, oh, you know, we're using an image of a plant and [00:05:00] so we should only use copy here that talks about sustainability or environmentally friendly. Right? And again, that doesn't have to be the case. Like a discount code and the plant might be the winning combo and you would never know until you tested that combo.

[00:05:15] So you know, I think, our big mission is to get people away from using opinions and assumptions to make creative decisions. And this is like just a mistake that we see people make over and over as we try and help them correct that because it's what they're used to. They're used to putting together one full concept against another full concept and testing those two things.

[00:05:38] And so it gets really hard to break every single element of that apart.

[00:05:44] Susan: So the thing that also applies here is really the most basic things that some brands take for granted.

[00:05:51] And the one that comes to mind was logos. They're just like, okay, we'll slap our logo on it. And they just use the same logo with their brand colors. The same thing they've [00:06:00] always used, cuz that's just, I mean, it's their logo. It's how they always do it. And no one has ever stopped to say, should we just use the, the black version of the logo because that might stand out better on this.

[00:06:10] Right? It's like, it's just never been done that way. So it's not even willfully not doing those things. It's just that it's never been evaluated because it didn't matter before. It was like, well, the logo probably makes no difference anyway. And in AB testing, we really just need to find the winner. So like, let's just keep it to the things that we feel like we can control,

[00:06:26] and the logo is what it is. So I think that there's even things like that, where you have to remind brands and customers that everything in this is testable. Like even the things that you just do, you know, almost by muscle memory at this point, all of that is still is totally fair game for testing.

[00:06:44] Jess: Absolutely. We, we have a healthcare startup that's one of our customers and, and that is the exact example. They tested their full-color logo against their white logo. They used the full-color logo on everything and have for over a year, but it turned out that the white [00:07:00] logo dropped CPA by $60. Yeah. And they were blown away and now they've replaced that everywhere else. Right? Because it's like, if that's the one that people resonate with, why wouldn't you use it?

[00:07:11] Susan: Exactly. So let's talk about another mistake.

[00:07:13] Jess: Okay.

[00:07:14] Susan: And I feel like the next two I'm gonna cover are just two opposite ends of the spectrum. Third mistake that we see a lot is making decisions too soon on testing.

[00:07:26] And this actually, I mean, this does happen in the world of AB testing as well, but you have to be careful about when you decide to make optimizations or decisions about what you'll be running and what you turn off halfway through a test. You know, we've seen people do it, they get, like, three purchases.

[00:07:45] And they're like, K this one's winning and they turn off everything. Right? So it's kind of like you know, there's a temptation there because now you're getting this data you've never had. And so you're like eager to act on it and you're like, wow, I can do something with this. And it's, it goes back to the just cuz you could doesn't mean you [00:08:00] should yet.

[00:08:02] But that mistake is really closely tied to the fourth mistake, which is overvaluing the role of statistical significance. So, this is where I feel like mistake three and four, you have to find where that right place is for you. You can definitely optimize it too soon and you can definitely wait longer than you have to.

[00:08:23] You know, I understand why people kind of obsess over stats, but I feel like it's something you rely on a little bit more early on in your relationship with the brand or your testing, because you don't ... Excuse you.

[00:08:37] Jess: Petunia needs to be heard.

[00:08:41] Susan: Please lay back down. You were sleeping. It was fine. Okay. So when you worked with a brand, you know, for a certain period of time, you start to get familiar with sort of, where are the benchmarks of like what's average for them when they run an ad, what's really far out of the realm [00:09:00] of normal for great performance,

[00:09:01] and what's really far out of the realm for poor performance. You start to kind of just know where those things lie when you've been doing it long enough. So, you know, there is that middle ground between like, sure you don't optimize after there's four purchases, but you don't have to wait for 400 either.

[00:09:16] Right? Where that is, like, I feel like there's a lot of questions about like how long should I let it run? And the truth is, it just depends on what's normal for that account. So, you know, I have an account that I manage where I know that their average CPA in Facebook is $75. Sure, stuff comes in at $120.

[00:09:33] Sometimes they'll have a sale and stuff comes in at like $22. Right? So I know that there's this wide band, but I know on a normal basis, if we launch something in Marpipe and it's got, you know, three to five different ad sets with creatives running, and one of them is generating like a $75 CPA, which is normal to above average for them,

[00:09:57] and another one of them is [00:10:00] like one purchase for $500, the odds of that one catching up to the one that's doing well is so slim just because I know how their stuff works. I know that whenever I launch stuff, it usually within the first 72 hours is gonna show promise or it just never does well, no matter what.

[00:10:17] So I feel more confident with that brand being like, nope, I can go ahead and turn this off cause I know this is never gonna catch back up. Like it's just not gonna perform. Brands that I'm not as familiar with though, I wouldn't have that comfort level because I don't know. I'm like, it could be that this happens sometimes and it takes a week or two for it to find its footing.

[00:10:34] Like, I don't know. So those two mistakes, you know, I feel like you really have to temper with your knowledge as a media buyer, your knowledge of that brand and what their averages are like, what their costs are that has to play a large role in you determining am I optimizing this too soon? Or am I just like waiting 'til the cows come home?

[00:10:52] And I really don't have to. So some of that's, you know, declarative in nature where it's like, don't do it too soon. Don't do it too late. But where [00:11:00] that is depends on the brand.

[00:11:01] Jess: Yeah. And I think, you know, the more that you test, the more that you use multivariate testing, the more comfortable you get with these things. You are going to make mistakes.

[00:11:10] Susan: Yeah.

[00:11:10] Jess: Especially early on, like that's just to be expected. I think the good news is the more you do it, the less likely you are to make mistakes, the more valid the information is going to be. And, and the better you get at knowing what to do next, right?

[00:11:25] Susan: Yeah.

[00:11:25] Jess: Our customer success team does an amazing job of that, but we also have a lot of customers who have worked with us long enough to like, okay, nope, I got it now. Like I know when I learn this, I should dig into it a little deeper, and that means I'm going to test this next. And so, yeah,

[00:11:39] Susan: I feel like, I feel like that's always kind of a big question mark, for a lot of places at first.

[00:11:43] Like even when we're just talking to them initially about figuring out if Marpipe's even a good fit for them, is that there's always kind of this question of like, well, why would I need it ongoing? Like, wouldn't I just run a test and learn what I need to know. And it's like, well, no, cause that's what you're used to with AB testing.

[00:11:56] You're like, well I just run a test and then I know. And it's like, it's a [00:12:00] lot more layered than that once you start doing multivariate, cuz it's like, you'll learn one or two things. That doesn't mean that you've learned everything you wanna know. Yeah. Now there's all these other elements you can learn. So like, you're gonna have a lot more tests you can run than you normally would, but because people aren't conditioned to thinking about it that way,

[00:12:14] they still have that AB testing mindset where they're like, I'm not gonna need this for six months. You know, I only need to run one or two tests. And then to your point, it's like, once they run one or two, they start getting into the rhythm of like, okay, this one, so I'm gonna carry this over. So now I wanna test these things against it.

[00:12:29] So it's just like this process of carrying over the things you learn and doing a new test each time that makes it a process.

[00:12:36] Jess: Absolutely. And the thing with multivariate testing is you, you can use it to start stacking performance improvements incrementally, right?

[00:12:42] So like each test, the hope is you get just a little bit better and a little bit better and a little bit better. And pretty soon you've gone from, you know, six months ago where you were testing and you now have kind of this incrementally gained improvement over time because you just kept going.

[00:12:58] Yeah. Unlike an AB [00:13:00] test where, like you said, you just do it once and this one won and that's what we're gonna run. And the next time we have a product launch or a sale, we'll try it again. Right?

[00:13:08] It it's, it's much more of a, of a consistent type of testing and therefore much more powerful.

[00:13:14] Susan: Yep. Agreed.

[00:13:15] Jess: Amazing. This was a lovely conversation, Susan.

[00:13:21] Susan: It's fun to talk about screwing up, isn't it?

[00:13:23] Jess: I know it makes me feel so much better about myself. Yes. Mistakes will be made, but it's okay. You can learn from them.

[00:13:31] Susan: That's marketing, baby.

[00:13:33] Jess: That's marketing, baby.

[00:13:34] Susan: Making a bunch of mistakes to figure out what works

[00:13:36] Jess: That's right. It's all a giant test.

[00:13:38] Susan: It is.

[00:13:40] Jess: Well, thank you for joining us. We'll be back with another episode of Resting Ad Face.

[00:13:47] Susan: See you next time.

[00:13:48] Jess: Bye.

[00:13:49] Susan: Bye.
