
Sequential testing vs. testing all of the things at once

Sequential testing is a popular ad testing approach for many marketers today. But its biggest downfall is its lack of speed. There’s got to be a better way!
Jess Cook

Sequential testing, as applied to ad creative, involves testing different creative elements in your ads one at a time. For example, you might first test a handful of images while keeping the copy, background color, and offer the same. Upon finding a winning image, you might move on to testing headline variants with the winning image, still keeping the background color and offer the same. And so on.
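To make that flow concrete, here's a minimal sketch in Python. The run_ad_test() helper is hypothetical, a stand-in for launching one controlled test on an ad platform and waiting for a winner:

```python
# A minimal sketch of sequential creative testing, one element per round.
# run_ad_test() is hypothetical: it stands in for launching a controlled
# test on an ad platform and returning the winning option.

def run_ad_test(options, fixed=None):
    """Hypothetical: test the given options with everything else held fixed."""
    ...  # launch ads, wait for significance, return the winner

images = ["lifestyle.jpg", "product.jpg", "ugc.jpg"]
headlines = ["Save 20% today", "Free shipping", "New arrivals"]

# Round 1: vary only the image; copy, background color, and offer stay fixed.
best_image = run_ad_test(images)

# Round 2: vary only the headline, locked to the winning image.
best_headline = run_ad_test(headlines, fixed={"image": best_image})

# Round 3 and beyond: background color, offer, CTA... each round
# can only start once the previous one has finished.
```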

Testing sequentially is a popular approach for many advertisers today. It’s methodical. It’s controlled. And it can tell you a great deal about which creative assets are the main drivers of your ad performance.

But the biggest downfall of sequential testing is its lack of speed. Testing one element at a time can take months. (Not to mention the hassle of strategically naming and organizing assets and ad sets, as well as tracking ongoing test results.)

*Cue infomercial-level exasperation.* There’s got to be a better way!

On the latest episode of Resting Ad Face, our VP of Performance Marketing, Susan Wenograd, and I talk through the pros and cons of sequential ad testing, plus how automated multivariate testing (MVT) can take your creative testing and ad optimization to the next level.

Here are some highlights:

Sequential testing is better than A/B testing, but leaves you wondering, “What if?”

While sequential testing delivers a deeper level of creative data than A/B testing, it can still leave marketers with a ton of unanswered questions.

For example, you test your images and find a winner, then test your copy and find a winner. You might have a winning combo. But what if, together, a non-winning image and a non-winning headline were some sort of ad performance power couple?

The notion that “the whole is greater than the sum of the parts” can never be fully explored in sequential testing, leaving you open to missed performance opportunities.

Multivariate testing answers the “what ifs” of creative testing.

Automated multivariate testing lets you test numerous creative elements all at once, so you can discover which combinations of assets drive the best results, and do so faster than sequential testing.

MVT also opens up a world of new possibilities for creative insight. You can see, for example, which headlines are most effective with which images and calls to action. And that magic can only happen when you test every element, in every combination, at the same time.
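For a sense of what “all at once” means in practice, here's an illustrative Python sketch. The asset lists are invented; the point is how quickly a few assets multiply into many combinations:

```python
from itertools import product

# Illustrative only: every element crossed with every other element.
images = ["lifestyle.jpg", "product.jpg", "ugc.jpg"]
headlines = ["Save 20% today", "Free shipping", "New arrivals"]
ctas = ["Shop Now", "Learn More"]

variants = [
    {"image": i, "headline": h, "cta": c}
    for i, h, c in product(images, headlines, ctas)
]
print(len(variants))  # 3 * 3 * 2 = 18 ad variants from just 8 assets
```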

Automation makes multivariate testing humanly possible.

Multivariate testing of ad creative can technically be done manually. But it’s no walk in the park.

Advertisers would need to create assets for each potential combination, then name and organize them in a way that’s both logical and manageable. And that’s all before launching a perfectly controlled test and tracking the results.
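As a hint of that bookkeeping, here's a hypothetical sketch of the kind of naming convention a manual test needs, encoding each element's index into the ad name so results can be grouped later:

```python
from itertools import product

# Hypothetical naming scheme: encode every element in the ad name
# so exported results can be filtered and pivoted by element later.
headlines = ["Save 20% today", "Free shipping", "New arrivals"]
images = ["lifestyle.jpg", "product.jpg", "ugc.jpg"]

for (h_i, headline), (img_i, image) in product(
    enumerate(headlines, start=1), enumerate(images, start=1)
):
    ad_name = f"HL{h_i}_IMG{img_i}"  # e.g. "HL2_IMG3"
    print(ad_name, "->", headline, "+", image)
```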

But with an automated MVT tool like Marpipe, all you have to do is upload your assets, generate your ad variants, set your test criteria, and watch the magic unfold. Statistical significance is calculated and reported automatically, creative intelligence is aggregated in one place, and performance impact is reported in black and white. (Well, actually, red and green, but you get the idea.)

For more, subscribe to Resting Ad Face on YouTube or your favorite podcast platform.

Transcript

[00:00:05] Susan: Hey everybody. Welcome back to Resting Ad Face. I am Susan Wenograd, the VP of Marketing here at Marpipe, and I am joined by my amazing coworker, Jess Cook, who is our VP of Brand and Content. 

[00:00:19] Jess: Hello. Hello. Welcome. 

[00:00:21] Susan: We are here to talk about sequential A/B testing today, and then testing all of the things pretty much. 

[00:00:29] Jess: Yeah. Yeah. Like, like can you do everything all at once or is it better to like, just test one thing at a time, which is what a lot of people do. Yeah. 

[00:00:40] Susan: Yeah. I mean, so I think let's, let's start with the one at a time thing, right?

[00:00:45] So, and this was how I always kind of had to do it before. So because you can't like, isolate data outside of a platform like Marpipe, what you're forced to do is say, Okay, let's assume that the [00:01:00] visual is the most high-impact thing in the ad, right? So you take the same, you know, overlays, same ad copy, like basically everything is a control element.

[00:01:11] Except for the visual or video. So that way it's like there's nothing else skewing it. You know that the only reason that there's a difference in the data is because of what the visual is. So it's like, first you figure out what is the winning visual. Then once that test is over, then you take that visual and then that's when you're like, Okay, I'm gonna now test the, you know, the headline that's over it.

[00:01:31] So then you might make like three different versions where it's the same visual, the same paragraph, like text copy. But then the only thing that's different is what's overlaid on the image. Then you test that, et cetera, et cetera. So you have to do it for, you know, like every part of the ad. And then ostensibly at the end you have the winning elements from each test that you then combine into the one creative to rule them all, in theory.

[00:01:56] That's the idea. Anyway. So that's, [00:02:00] I think, traditionally you know, what we've had to do without tools to help us do it any other way. 

[00:02:05] Jess: Yeah. But here's the thing, what happens when... and you would never know this because you can't pair them together. Right? So you test your image, you find your winner, you test the copy, you find your winner.

[00:02:18] But like my brain would always go, but like, what if this copy and this image that didn't win last time were like a power couple? Yeah. And if I put them together, they did even better than like the two winners I have put together. Right? Yeah. Like that "what if" would always be in the back of my mind.

[00:02:35] Susan: The analysis paralysis is real too, because then you're just like, well, maybe this headline would've done better. You know, like my headline that I came up with that was so witty and I loved so much, like, maybe it would've done better with one of these different visuals. Right. So it's like, it's so easy to just get, you know, into that like anything could actually work.

[00:02:55] It's just that we're not equipped to be able to test it that way. 

[00:02:58] Jess: Yeah, for sure. [00:03:00] What are, what are like the pros of testing in that sequential order? 

[00:03:05] Susan: So it's, it's very controlled, right? I mean, you can assume that you know why something is working. It just removes that kind of variable piece of it.

[00:03:18] I think the other thing is too, it's like when you just test three completely different creatives and then one wins, a lot of times brands will be like, oh, so we need to do that format. We have to, like, they keep trying to replicate that one version and then the other ones don't do as well. So because you don't really have any sense of

[00:03:35] what drove why it did well. Was it the visual? Was it the headline? You don't really know how to capitalize on it. It's like you spent a lot of money testing this. You found a winner. So like the pro is you have this one that you're like, Great, this thing's a winner. I can run it all day long. It's like I have a large client where there's this one creative that I, I could not beat it.

[00:03:54] It ran for like eight months. It was crazy, and not even by a small margin, like it always won. Yeah. And so it's nice when you find one of those horses that you can just kind of keep riding across the finish line. You know, the, the upside to that obviously is like, yay, I have this winner and I can like, push all my chips in the middle of the table.

[00:04:16] So in some ways it makes your life a little easier. But, you know, the, the drawback obviously is that you don't know why that one is winning. It's like, I can assume it's the visual, but I took the same visual and paired it with other stuff and it didn't do as well. So it's, that's the, you know, that's always the drawback.

[00:04:33] I mean, the pros are that it is a controlled environment, so you don't have as much uncertainty. Like we said, there's still the temptation to be like, but what if? Yeah. You know, so you're not necessarily gonna have total certainty, but you're, you feel like you're a little more informed data wise than you would be if you just kind of made three completely different concepts and, and put them forward.

[00:04:52] Jess: Yeah, for sure. So in like the realm of good, better, best, like an A/B test is good, better is like this kind of [00:05:00] sequential, right? Yeah. I think the thing too is like you, you have to know that there's gonna be missed opportunities, but like, it is better, right? It's a better way to do it. So it's like, that's as good as I can get and it's, it's giving me more information than if I had done it in A/B, and so that's kind of like, I'm, I'm gonna have to be okay with that. It's good enough.

[00:05:24] Susan: Right. The other, the other factor with it honestly, is also sometimes just time. So it's, you know, if you're running like a two to three day sale, you don't have time to do the sequential testing. It's like, it is such a compressed time period.

[00:05:35] So that's the other thing that can affect that. It's like if you're, if you're testing just evergreen ads that you can run any time of year, it's much easier to be able to set up a controlled test like that. Yes. Versus there's some insights you can get previous to a sale or something, but sometimes it's really hard.

[00:05:51] I mean, it's, it is hard to run Black Friday creative prior to Black Friday. It just is what it is, you know, it's like you can't really [00:06:00] run that ahead of time. Some things you have a little more leeway, like if it's Mother's Day, you know, people, they start messaging the month before, right? So you have a little more time to figure that out.

[00:06:08] But some things that are just those super quick, like, Memorial Day weekend sale, Labor Day weekend sale, President's Day weekend. You know, like those things, especially like with, you know, I always think of home decor because they have kind of different cycles for selling that. I mean, they do Black Friday and stuff, but like mattresses are huge in Yeah.

[00:06:28] March, April because of tax refunds. So they kinda have weird, you know, little proclivities like that. And there's just not a lot of time to test that stuff. So that's kind of the other factor of, of what can decide that it's not even necessarily how you wanna do it, it's do you really have the time to do the sequential testing option?

[00:06:43] Jess: Yeah, absolutely. So, okay. So we go from there. Then if we've got like good and better, and then best is like, Okay, I'm just gonna do all of that at once. And that's kind of where multivariate testing comes in, right? Like, I'm gonna take all of [00:07:00] those kind of, you know, variables that I tested sequentially, and I'm going to compress it into one test.

[00:07:05] Yep. So I'm going to find out a lot of information very quickly. Mm-hmm. And I'm gonna be able to find maybe those missed opportunities that I wouldn't have found through sequential testing because I didn't have the chance to like pair this headline that maybe wasn't the front runner with this image that was.

[00:07:25] Right. Right. So so let's talk about that a little bit. Like some of the, the pros and cons there. 

[00:07:31] Susan: So, I mean, the pros are obviously, it, it removes a lot of that uncertainty, right? Yeah. So that, that piece that you can't solve for with the other two types of testing that goes away, essentially. The con is just the lift involved.

[00:07:46] Because, because of the way Facebook distributes spend, you can't set everything up to be in one ad set or it's gonna be too many ads, and it basically will only ever give impressions to [00:08:00] maybe three, but it ends up picking a winner like right away. And it, it'll spend 99% of your spend towards the one.

[00:08:08] And that, I mean, that's fine if Facebook is right and that one's the winner, but that's not what, you're not gonna learn anything from it. Again, you're back in that, okay, this one won, great, what do I learn from it? Nothing. Right? So, in order to do it, it's, there's, there's really two levels of, of heavy lift with it.

[00:08:22] One is that every creative has to go into its own ad set, so you're gonna be creating an ad set and then duplicating it. And so you're gonna have to plan out like how many versions total there are gonna be. You have to create the number of ad sets for those. So that's the first. There's just the media buyer, you know, lift of how manual that is. Yeah. And then there's also the designer piece, where it's like, hey, you gotta create, you know, 30, 40, 50 pieces. And so getting all those made, making sure that you have every combination possible, thinking through like, Okay, did this headline truly get matched with every [00:09:00] visual? So it's like, if you do it manually, I'd still say there's limitations, in that the human ability to, like, expertly make sure you have all 50, 60, 70 versions done. Yeah. Is tough. Yeah. And then you're having to go and like create them in every single Facebook ad and, you know, make sure you're cutting and pasting the right text copy with each one. Like it's just so much detail orientation and some people are great at that.

[00:09:26] I'm not. So it's like the thought of doing that, I'm like, Oh, I would screw that up in 30 seconds. You know? It's like there's, I would not, I'd miss something for sure. 

[00:09:34] Jess: The logistics and like the tracking and, and organization of that. 

[00:09:39] Susan: Yeah. Cause then you have to think through, Okay. And this was kind of how I used to do it, not for multivariate testing, but just to aggregate data results, is I would come up with naming conventions and I literally had like a Google sheet that was a key that would be like headline one is this, headline two is this, headline three is this, and then I'd have it for visuals.

[00:09:57] So every single name would be like HL [00:10:00] one underscore whatever. So when I exported all that data, I could filter. So it's like, let me see how anything with HL1 did. Let me see how anything with HL2 did. Cuz you can't do that in Facebook's interface. Right. So you have to be fanatical about how you name things. And the worst part is, it's like if one has an underscore and one doesn't, when you make a pivot table, they, they're listed as two separate things.

[00:10:21] So it's like you have to be, the amount of detail orientation it takes to make that work is really staggering. So take that and then multiply it by testing, you know, 10 headlines and five images and it's just like, Oh my God. I mean, trying to wrap your mind around that is crazy. It's like, can it be done?

[00:10:36] Yes. Yeah. But it's also like you take the sequential testing idea and the time it takes and it's like, it's gonna take you longer to set it up than it is for you to run the test.

[00:10:47] You know what I mean? 
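For illustration, here's roughly what that naming-and-pivot workflow looks like in Python with pandas. The ad names, metrics, and column names are all made up, not Facebook's actual export schema:

```python
import pandas as pd

# Hypothetical export: each ad name encodes its headline and image.
df = pd.DataFrame({
    "ad_name": ["HL1_IMG1", "HL1_IMG2", "HL2_IMG1", "HL2_IMG2"],
    "clicks": [120, 95, 210, 180],
    "impressions": [10_000, 9_800, 10_200, 10_100],
})

# One inconsistent name (say, "HL1-IMG1") would silently split a group
# in the pivot, which is why the naming has to be fanatical.
df[["headline", "image"]] = df["ad_name"].str.split("_", expand=True)
df["ctr"] = df["clicks"] / df["impressions"]

# Average performance per headline, across every image it was paired with.
print(df.groupby("headline")["ctr"].mean())
```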

[00:10:47] Jess: And then at that point it's like, is it worth the time? And I think that's why so many people sit with this sequential testing because it's better than A/B, it's humanly possible. It's good enough. 

[00:10:59] Susan: [00:11:00] Yeah. It's also a lot easier to figure out statistical significance cuz that's the other thing. Sure. When you have 50 ad sets and each have spent like 50 bucks, then you're going, Okay, so is this enough for it to matter? So then you're like going in and plugging it into a stat sig calculator to figure out like, how close am I?

[00:11:17] What's the level of confidence? It's so much easier to look at like a sequential test with three elements and be like, not there yet. You know? It's like you can just mark it and you kinda know. You can't do that when you have 50. It's just way too hard.
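As a sketch of that stat sig check, a two-proportion z-test on click-through rates is one common approach; the numbers below are invented, and statsmodels is just one library that offers it:

```python
from statsmodels.stats.proportion import proportions_ztest

# Did variant A's CTR really beat variant B's, or is it noise?
clicks = [210, 180]             # clicks for variant A, variant B
impressions = [10_200, 10_100]  # impressions for variant A, variant B

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"p = {p_value:.3f}")  # conventionally, p < 0.05 counts as significant
```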

[00:11:28] Jess: For sure. So now we bring automation into the equation, right?

[00:11:33] Mm-hmm. So like we're trying to do this multivariate thing. It's hard as shit. It's so many things. Yeah. My mind is like eggs and, and now I can bring in something like Marpipe, where like it's going to create all of those different variants for me. So now I very quickly have, let's say 50 to 60 ads. Mm-hmm.

[00:11:56] It's going to let me launch that [00:12:00] test right from there into Facebook. It's going to calculate stat sig for me. Right? So like it goes from being like, yes, humanly possible, but like, not, 

[00:12:11] Susan: not desirable. 

[00:12:12] Jess: Not desirable. 

[00:12:13] Susan: Not humanly desirable. 

[00:12:14] Jess: No, not efficient. Really not worth the time. To like, okay, now I'm getting this new level of information. Yeah. And I'm not killing my people to do it. 

[00:12:24] Susan: Yeah, yeah, exactly. I mean, that's, that's the, that's the piece of it is just the, the detail orientation, the strategic part and then the execution are all pretty painful. I hate to say it, but I'm like, all of it's very painful if you don't have something to kind of finesse all of that for you, you know?

[00:12:41] Yeah. And I mean, that's, again, that's what Marpipe does. That's why I love it, because, you know, it takes the options that formerly have their drawbacks and it helps fix some of them. But it also just fixes the lift of, like, you know, obviously the fix is to be able to multivariate test, ultimately.

[00:12:58] But it makes it possible [00:13:00] because obviously it is not, not how you probably want your people spending their time. I mean, that's the funniest part is that most of it's just grunt work to get it done. Yeah. It's not, you know, it's like the strategic piece of it doesn't take that long. It's the manufacturing and production of it that takes so long and that's like the least valuable use of your team's time usually. Yes. So it's like you're spending the bulk of your resources on something that's time consuming. And the yield for the data is important, but it's like, is it so important that you want them spending like 30 hours of time doing this, you know?

[00:13:32] Maybe not. 

[00:13:33] Jess: Yeah, definitely not. Yeah. I think the thing too, I would love to know your experience about when you're doing sequential testing manually. Yeah. How are you tracking the results? Like, how are you, are you able to like build something based on that? Meaning like something that you can look back at and historically say like, this works, this doesn't, kind of as a generality for a brand.

[00:13:54] Susan: I mean, you know, you can document it somewhere, but it's not, it's not like [00:14:00] a central repository.

[00:14:01] That's the same for every brand, because every brand, like, they communicate stuff differently. Some of them log things in Notion, some of them use Slack, some of them use Google Sheets. Like there's no place that most people will just go into and look and be like, Ah, here's, here's the advertising key

[00:14:16] with all of our knowledge and everything we've learned. So a lot of it just kind of becomes institutional knowledge. Sure. And then even after a while that gets really outdated. You know, it's like you'll, when you work with brands, a lot of times you'll kind of hear these urban legends about like, when we tested that and it didn't work. So it's like they, there's just these, like this tribal knowledge type thing that you start to realize really isn't real.

[00:14:36] It just feels real because that was their experience at the time. Yeah. So that's kind of the other piece too. And there's no real data to look at. Like you can try and look in the Facebook ad account, but most buyers are not fanatical about how they name things. So when you look at it, you don't even know what you're looking at.

[00:14:51] It's like you have to go into every single ad and be like, what's the targeting? And then go like, you have to manually, there's no really easy way to know that. So that's kind [00:15:00] of the other piece is because it's not easy to do on Facebook, if you take over an account, what's there is sometimes not immediately

[00:15:07] evident at all. So, yeah, the institutional knowledge piece is a real challenge. 

[00:15:11] Jess: And I, but I think though, like the brands that are able to figure that out are the ones that are going to win. Right? Yeah. Because I think when you get those right, it makes everything cheaper. Yeah. Like it just makes everything work better for less, right? Yeah. So, you know, things like knowing, hey, this particular shade of green does really well for us, or like any headline we do about this specific feature of the product always performs really well. And like just knowing that historically, so you can kind of hand that almost as part of your brief to anyone who touches the brand

[00:15:50] Susan: like makes it so much easier. You're just not, you're not wasting time, you're not wasting data, you know, you're not wasting money on data.

[00:15:56] All that. Yeah. 

[00:15:57] Jess: You're all working from the same kind of [00:16:00] North Star in terms of creative. 

[00:16:02] Susan: Exactly that. 

[00:16:05] Jess: Well, that was it. We just wanted to share all of this with you. This idea, this is how most people are testing, right? This kind of sequential way, and you know, it works. But there are better ways. There is a better way. And

[00:16:21] Susan: Better living through tech.

[00:16:23] Jess: Better living through tech. There's like 17 companies with that as a tagline.

[00:16:33] So yeah, look into it. It saves your team time. The data you get is really incredible. 

[00:16:40] Susan: There's such a sunk cost in team time too that I feel like no one accounts for. They just kind of account for like the media cost, but it's like, staff time costs money. It's expensive to have people doing things. So I feel like that gets so easily overlooked.

[00:16:51] Jess: Absolutely. Or like, or just like arguing over like, No, this color, No this color. Like you eliminate that, right? Like there's so many like kind of [00:17:00] long-term, unseen things Yeah. That this type of testing just completely removes from the equation. So agree. Yeah.

[00:17:07] Besides just like the short-term performance, you have to think long term too. Like how is this gonna help us as a brand? So.

[00:17:13] Susan: Yeah. Totally. 

[00:17:14] Jess: All right, friends, thank you for listening to Resting Ad Face. Susan and I will be back. 

[00:17:21] Susan: See you next time. 

[00:17:22] Jess: Bye.

[00:17:23] Susan: Bye.
