Chapter One

Creative intelligence: the new must-have for marketers

For decades, marketers and advertisers relied almost exclusively on audience targeting data to decide which ads to run and where. That data made building ad creative and buying ad space easy.

The old paradigm: data mining

If you knew the previous purchases and user behavior of someone who looked like your customers demographically, you could predict with near certainty whether they would buy your product or service. All you had to do was serve the right ad at the right time to those most likely to convert — information marketers had at their disposal thanks to audience targeting.

But in recent years, the importance of user privacy has rightfully started to outweigh the needs of marketers. Companies and governments have implemented mandates preventing invasive audience targeting. Apple’s tracking opt-in in iOS 14, the California Consumer Privacy Act (CCPA), and the General Data Protection Regulation (GDPR) are all direct results of this shift away from mining user data for commercial use.

The new paradigm: creative intelligence

Creative intelligence, on the other hand, has long been treated as a nice-to-have rather than a decision-driving tool.

But now, two factors are turning creative intelligence into the new must-have for marketers:

1. Unlike audience data, ad creative will always be in the marketers’ control.
2. Less reliable audience targeting requires better (i.e. more relevant, engaging, action-driving) ad creative.

Surviving the paradigm shift

As invasive targeting comes to an end, target audiences become broader. It will be the brands with the most informed, data-driven ad creative — those who continuously test creative and scale top-performing ads — that win the day.

This isn’t to say that the art of crafting ad creative is dead. There will always be a need for brilliant art directors and copywriters to develop the concept and design the execution. But to build better creative you need deep, solid creative intelligence.

The creatives behind our favorite ad campaigns are beginning to look more like scientists and engineers than ever before. And they’re backing their art with science, not just by testing one ad concept against another.

Using an ad creative testing approach called multivariate testing (MVT), today’s most innovative marketers test ad creative at scale and get insights into what’s working — and what’s not — right down to the headlines, images, and background colors.

01.

What is multivariate testing?

Multivariate testing measures the performance of every possible combination of creative variables. Variables are any single element within an ad — images, headlines, logo variations, calls to action, etc.

Because we can measure how every variable works with every other variable, we are able to understand not only which ads people love and hate the most, but also exactly which variables people love or hate the most.

For example, let’s say while concepting an ad campaign, your creative team came up with the following:

4 headlines

3 images

2 background colors

3 calls to action

To run a multivariate test against these assets, you’d build an ad for every possible combination — all 72 of them (4x3x2x3) — and test them against each other.

Example of a multivariate test with 4 headlines, 3 images, 2 background colors, 3 calls to action.
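To make the combinatorics concrete, here is the same test enumerated in a few lines of Python (a sketch; the asset names are placeholders for your real creative):

```python
from itertools import product

# Placeholder asset names standing in for real creative elements.
headlines = ["Headline 1", "Headline 2", "Headline 3", "Headline 4"]
images = ["Image 1", "Image 2", "Image 3"]
backgrounds = ["Background 1", "Background 2"]
ctas = ["CTA 1", "CTA 2", "CTA 3"]

# One ad for every possible combination of variables.
ads = list(product(headlines, images, backgrounds, ctas))
print(len(ads))  # 72 ads (4 x 3 x 2 x 3)
```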

There are insights you can only learn from testing in this way. From the test above, for example, you might learn things like:

1. Headline 2 outperforms the others by 200%
2. Image 1 increases engagement by 56% …
3. … while image 3 doubles your conversion rate
4. Call to action 2 didn’t convert at all
5. Background color 1 increased CTR by 17%
6. Ad 24 was the overall winner, outperforming the next best ad by 11%
The true value of multivariate testing lies in the creative intelligence it delivers, and the ability to use those learnings in future creative. If you knew that a certain color or image or text on a button consistently drove more people to purchase, you could capitalize on that knowledge until a new winner emerged.
In theory, multivariate testing is simple — it’s the scientific method applied to creative.
The reality is, building that many ads is no small feat for any creative team. This is why automation and a design approach called “modularity” are crucial in multivariate testing. (More on those in a minute.)
02.

Multivariate vs. A/B testing

Multivariate testing changes the number of ads, state of variables, and granularity of test results.
Multivariate testing and A/B testing are both valid ways to learn something about your ad creative. They both:

Test ad creative against other ad creative

Measure how well ad creative performs against a goal (conversions, engagement, etc.)

Where A/B testing and multivariate testing differ is in:

The number of ads typically run in each test

The state of the variables in each test

The granularity of the results of each test

A/B testing measures the performance of two or more markedly different creative concepts against each other. Long the standard in testing ad creative, this has historically been the way brands decide which ad or ad campaign to run against their media buy. Today, many brands run their ad creative through some form of A/B testing before deploying their creative live.
A/B testing in action: measuring the performance of two markedly different creative concepts.
But the greatest shortcoming of A/B tests is the sheer number of uncontrolled variables — each ad concept in the test is typically wildly different from the others. So while A/B testing can tell you which ad concept people prefer, it can’t tell you why. Multivariate testing can. This is why A/B testing alone is no longer enough.
In today’s advertising ecosystem, using multivariate testing is the only way to derive the micro-level insights marketers need to boost ad performance. Together, A/B testing and multivariate testing are a powerful pair. The first tells you which ad concept works best, and the second tells you which possible combination of creative elements and variants within that concept drives the greatest number of conversions.
Number of ads per test
A/B testing: 2–3
Multivariate testing: tens or hundreds

State of variables in each test
A/B testing: very few or zero variables are controlled. Example: two distinct ad concepts, each with different image styles, typography, headlines, and color palettes.
Multivariate testing: all variables are controlled. Example: 12 ad designs featuring every possible combination of two headlines, three images, and two background colors.

Granularity of results
A/B testing: macro. Solely measures the performance of ad against ad; tells you which ad won but cannot reveal why.
Multivariate testing: micro. Measures the performance of every ad against every other ad, and every element within each ad against every other version of that element; tells you how each ad, and each element of each ad, performed.

Best for
A/B testing: understanding which overall ad concept performs best
Multivariate testing: understanding which ads and which elements within your ads perform best

03.

Marpipe: Test hundreds of ads with full creative freedom

The key to successful multivariate testing is scale through automation. Simply put: the more ads you test, the more likely you are to find winners. Plus, multivariate testing at scale delivers micro-level creative intelligence you can’t get anywhere else, building a custom library of insights you and your team can use to build powerful, data-driven ad creative well into the future.
To put a finer point on it: multivariate testing does not “automate creativity.” In a world where ad creative is more important than ever, multivariate testing empowers creatives and marketers with the granular data they need to craft the right ad creative.
Marpipe app interface
Marpipe is the only multivariate ad creative testing platform that lets you build ad variations and launch tests — in front of your actual target audiences — automatically. The remainder of this guide will walk you through, step-by-step, how Marpipe can help you become a multivariate testing pro, find winning ads fast, and uncover ad creative testing results you can’t get any other way.
Chapter Two

Begin with the end in mind: long-term creative intelligence goals

01.

Defining long-term success

Before you jump headfirst into multivariate testing on Marpipe, take some time to understand what your marketing team hopes to accomplish on the platform. Beginning with the end in mind will help you plan and build your tests in Marpipe more strategically, all with an aim of reaching that final goal.

Determine how and why you want to test ad creative on Marpipe

Begin with your top objectives: what area or areas of the brand or business are you hoping to affect with better ad creative?
Here are a handful of example objectives from Marpipe customers. Some focus on just one, while others concentrate on two or three.
Prospecting: What works best with current customers isn’t always what works best with new ones. Multivariate testing can help you find the best possible combination of creative elements to bring new people to your brand.
Retaining current customers: Once they’ve made their first purchase, run multivariate tests to find the ad creative that keeps them coming back.
Launching new products and product lines: Set your new products and product lines up for success. Find winning ads and creative elements that drive people to purchase your latest offerings.
Testing seasonal designs and messages: Find out which creative elements work best to draw people into your biggest deals and holiday sales of the year.
Refining your brand’s overall look and feel: Maybe you’re not married to your brand’s design system, or you’re a relatively new brand still building your look and feel. Why not let your customers decide what they like best?
Building a database of historical creative intelligence for your brand: Every test adds another layer of depth to your brand’s pool of creative intelligence. Over time, you can build insights that hold true not just for ads but for your brand as an entity.

What’s your creative testing type?

Another way to help you understand your mission on Marpipe is to take a deeper look at your current ad-testing tendencies. We’ve found that our customers usually fall into one of three main types:

The Accelerator

Accelerators run smaller tests in rapid succession. They tend to have higher creative fatigue and are regularly looking for new high-performers. They optimize their ad creative and scale their winners ASAP after testing results roll in. Their end goal is to find new, top-performing ad creative.

The Researcher

Researchers run larger, methodically planned tests. They tend to have strong, evergreen performers that only need to be updated incrementally, month over month. They use the data they collect to inform their long-term creative strategy. Their end goal is to continuously refine top-performing ad creative.

The Hybrid

Hybrid testers exhibit characteristics of both the Accelerator and the Researcher in parallel, depending on their goals. They may run small, rapid tests to see how ads for a new product perform while also planning a larger, more comprehensive test against multiple categories of product imagery.
Which category does your marketing team fit into? With your current ad testing process in mind, take this quick quiz to find out.
02.

Reaching statistical significance on Marpipe

Marpipe is the only automated ad testing platform with a live statistical significance calculator built right in. We call it the Confidence Meter, and it tells you if your data for each variant group is scientifically proven — or not.
Each multivariate test run on Marpipe contains multiple variant groups — some of which reach high confidence sooner than others. When a variant group reaches high confidence, it means you have enough data to make creative decisions. And when enough variant groups reach a high confidence level, you can move on to your next test.
Look to the Confidence Meter to understand:

whether or not a variant group has reached high confidence

if further testing for a certain variant group is necessary

whether repeating the test again would result in a similar distribution of data

when you have enough information to move on to your next test

The Confidence Meter does NOT tell you that:

one variant or variant group is the all-time best (or worst)

a variant group will always (or never) impact your KPIs

there’s no need to challenge winners in subsequent tests

How to read the Confidence Meter

Gray means:

0–55% confidence; fluctuations in performance are likely due to chance

further testing for this variant group is necessary

you do not have enough information to move on to your next test

try testing variants with more substantial differences between them

Yellow means:

56–79% confidence; fluctuations in performance might be due to chance

further testing for this variant group is necessary

you do not have enough information to identify a definitive winning or losing ad or creative element

try looking at another KPI or continue to put spend behind this test to reach high confidence

Mild Confidence rating on the Confidence Meter, noting that there is evidence the group influences performance, but more data is needed to verify that these results are not due to chance.

Green means:

80–100% confidence; fluctuations in performance are unlikely to be due to chance

further testing for this variant group is not necessary

if enough variant groups are green, you have enough information to move on to your next test

continue to challenge winning elements and drop low performers in future tests

High Confidence rating on the Confidence Meter, noting that this group has a substantial influence over performance. It is unlikely that the observed performance differences are due to chance.
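Assuming the color bands described above, reading the meter amounts to a simple threshold lookup. A minimal Python sketch (illustrative only, not Marpipe’s actual implementation):

```python
def confidence_color(confidence_pct: float) -> str:
    """Map a variant group's confidence level to the Confidence Meter color,
    using the 0-55 / 56-79 / 80-100 bands described above."""
    if confidence_pct >= 80:
        return "green"   # unlikely due to chance; safe to act on
    if confidence_pct >= 56:
        return "yellow"  # suggestive; keep testing or check another KPI
    return "gray"        # differences are likely noise; keep testing

assert confidence_color(87) == "green"
assert confidence_color(62) == "yellow"
assert confidence_color(40) == "gray"
```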
03.

Implications for all brand creative

One of the greatest benefits of investing in multivariate testing is that it serves as a proving ground for your entire brand. Not only do the insights you learn pertain to your ad creative, they can also be applied holistically to all your properties:

Winning headlines can be applied to your home page and landing pages

Winning images can be used to inform future photoshoots

Winning colors can be incorporated into your packaging

Winning ad
Landing page
For customers with a long-term vision, Marpipe isn’t just a multivariate ad testing platform. It’s a new form of real-time market research, and the potential applications of creative insights across your entire brand are endless.

Chapter Three

Getting started: forming a hypothesis and organizing your assets

Every great test on Marpipe begins with a hypothesis, a modular ad concept, and a strategically selected set of assets.
01.

The importance of a solid hypothesis

Asking yourself, “What do I want to learn here?” is the very first step in kicking off a multivariate test on Marpipe. A well-thought-out hypothesis will help inform what assets — images, headlines, calls to action, etc. — should be tested in your ad creative.
Here are a few examples of hypotheses from our customers:
I want to see which generates more conversions: images of my product being used by a model or images of the product by itself.
I want to see how customers react to different types of offers and discounts; 50% off vs. $34 off, for example.
I want to see which background colors and patterns generate more leads.
I want to see how the color of the product and the call to action on the button affect my conversion rate.
I want to see if the CPA of our current top-performing ad could get any lower with a few small tweaks.
Now that we know what we want to learn, we’re ready to start designing the modular ad concept and prepping the assets that will get us to an answer.
02.

Modular design: the design approach for multivariate ad testing

What is modular design?

Modular design is a design approach in which placeholders within a template hold space for creative elements to live interchangeably. It’s a foundational pillar of designing ads at scale on Marpipe, allowing multiple creative elements to be swapped in programmatically.
Beyond scale, modular design is equally important in multivariate testing ad creative on Marpipe. It’s what allows each design element to be paired with every other design element, controlling all your variables for an effective test.
Chart pointing to design elements such as logo, image, headline, CTA

Within Marpipe, you can break your placeholders down into two types:

Variable: Any placeholder for an asset you want to test. The assets that live here should be interchangeable with one another. In other words, you can swap any asset of the same type into the placeholder and the design still works.
Fixed: Any placeholder you want to keep the same across every ad variation. These assets should work cohesively with your variable assets that are being swapped in automatically. One common example is keeping your brand logo the same size, color, and location in each variation.
Chart showing design elements, identifying variable and fixed placeholders
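One way to picture the variable/fixed split is as a template whose variable placeholders are filled with every combination of assets while fixed placeholders stay constant. A minimal Python sketch (the structure and asset names are hypothetical, not Marpipe’s internals):

```python
from itertools import product

# A modular template: fixed placeholders stay the same in every variation;
# variable placeholders hold interchangeable assets to test.
template = {
    "fixed": {"logo": "brand-logo.png"},  # hypothetical asset names
    "variable": {
        "image": ["model-smiling.jpg", "product-only.jpg"],
        "headline": ["Glow up your routine", "Skincare, simplified"],
    },
}

slots = list(template["variable"])
for combo in product(*template["variable"].values()):
    ad = {**template["fixed"], **dict(zip(slots, combo))}
    print(ad)  # 2 images x 2 headlines = 4 variations, each with the same logo
```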

Categorizing assets

How do you build modular ad creative for testing on Marpipe? Start by categorizing your assets.
Every single asset type can be categorized according to its attributes — a process that will yield huge dividends as you choose assets for testing a hypothesis (“Let’s see how images of ‘smiling models’ and ‘serious models’ affect performance”) and as you continue to build your database of creative intelligence (“Historically we see that ‘product shot in studio on black’ always outperforms ‘product shot in studio on white’”). The short sketch after the example list below shows how categorized assets can be pulled into a test programmatically.
Here’s an example of how you might categorize image assets for a health and beauty product:

Product only

Product in packaging
  Product in packaging with props
  Product in packaging on white
  Product in packaging on black

Product out of packaging
  Product out of packaging with props

Product with model
  Outdoors
  Indoors
  Model applying product
  Model holding packaging
  Model with face cropped from image
  Model with face in view
  Smiling models
  Serious models

Model only
  Facial close-ups
  Hand close-ups
  Smiling models
  Serious models
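Here is that hypothetical sketch: images tagged with the categories above, then filtered for the “smiling models” vs. “serious models” hypothesis (file names and tags are made up):

```python
# Hypothetical asset library: each image carries the category tags above.
assets = [
    {"file": "img_01.jpg", "tags": {"product with model", "smiling models"}},
    {"file": "img_02.jpg", "tags": {"product with model", "serious models"}},
    {"file": "img_03.jpg", "tags": {"product only"}},
    {"file": "img_04.jpg", "tags": {"model only", "smiling models"}},
]

def with_tag(tag: str) -> list[str]:
    """Return the files carrying a given category tag."""
    return [a["file"] for a in assets if tag in a["tags"]]

smiling = with_tag("smiling models")  # variants for one side of the test
serious = with_tag("serious models")  # variants for the other side
print(smiling, serious)
```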

What assets should you test first? Start with what you already have.

For your very first test, we recommend starting with your current top-performing ad and seeing what happens when you make a few slight changes using current assets. You might find that your best ad could do even better just by changing the position of the headline, the color of your background, or the facial expression of your model.
Current top-performing ad
Slight variations of the top-performing ad to test
In general, you’re looking to learn 1–2 things in every creative test you run on Marpipe. Try not to force too many variables into any one test. Instead, run multiple smaller tests at the same time.
Those smaller tests should contain wildly different ideas. Here are some prime examples of the types of creative elements — and a few variants of each — you could test on our platform.
Element group 1: Image
Element group 2: In-image text
Element group 3: Design
Element group 4: Out-of-image copy
Element group 5: CTA
Element group 6: Video
Element group 8: App screen
Element group 9: Human
Element group 10: Product
Element group 11: Pattern
Element group 12: Ratio
Element group 13: Background Color
Element group 14: Facial expression
Remember our example hypotheses? Here they are again, this time with ideas on which variable assets could help us answer that hypothesis.
I want to see which generates more conversions: images of my product being used by a model or images of the product by itself.
Make the image placeholder the variable for your test
Swap in any images categorized as “product with model” and “product only.”
I want to see how customers react to different types of offers and discounts.
Make the in-image copy placeholder a variable for your test
Generate a number of offers and discounts to test ($30 off, 30% off, free shipping, buy one get one, etc.)
I want to see which background colors and patterns generate more leads.
Make the background color and background pattern layer variables for your test
Swap in a number of colors from your brand color palette
Generate a number of background patterns to test
I want to see how the color of the product and the call to action on the button affect my conversion rate.
Make product image and call-to-action copy variables for your test
Swap in product shots of each color
Generate a number of CTAs (buy now, shop now, shop [product name], etc.)
I want to see if the CPA of our current top-performing ad could get any lower with a few small tweaks.
Use your top-performing ad as your modular template
Choose the variables you want to try tweaking (say, image and headline)
Swap in variants of those elements to see if any of them outperform the original

To make the shift to modular, find your creative champions.

Creative teams have a superpower: understanding how to reframe customer pain points, needs, and desires into powerful pieces of commercial art. Marpipe aims to fuel that creativity by backing the best ad creative with equally powerful data. If there’s one thing creatives love more than coming up with killer headlines and visuals, it’s knowing that their instincts were right — and are now paying off big time for the brand.
But getting creative teams on board with modular design is easier said than done.
It flips the traditionally free-form process of building ad creative on its head and can take some getting used to. Find one or two creative mavericks willing to take a chance on modular design and multivariate testing. Pull them into experiments with you. You’ll be amazed at the slight but impactful variations you’ll be able to test — and the often mind-blowing insights you’ll get as a result.

Chapter Four

Building ads at scale: templates, variants, and creative elements

Marpipe automates the ad creative design process, letting you build better ads in less than half the time.
01.

Key Marpipe vocabulary

Before we jump into building ads in Marpipe, let’s define a few things.
Creative element
A single component within an ad, like an image, a headline, or a button. Creative elements fall into two categories in Marpipe.

Variable elements are ones you want to test.

Fixed elements are ones you want to keep the same across every ad variant.

Examples of variable and fixed elements in the Marpipe interface
Creative element variant
One version of a creative element. There can be multiple variants of every creative element in one multivariate test on Marpipe.
Examples of creative element variants in Marpipe interface
Ad variant
One possible combination of all creative elements. There can be tens or hundreds of ad variants in one multivariate test on Marpipe.
4 images x 2 headlines = 8 ad variants
Template
The blueprint that outlines your combination of creative elements. You can have multiple templates in an experiment; however, all fixed creative elements remain fixed in each new template. Only variable elements can be rearranged in a new way within each template.
3 different templates that contain the same variable elements
Design
A full set of ad variants in a single test. Each of your individual designs can be found in the “Create” tab of the Marpipe app.
Individual designs in the 'Create' tab of the Marpipe app.
02.

The template is key.

Your ad template is the controlled container inside of which all your variables will be tested. It must be flexible enough to accommodate every creative element you want to test, and yet still make sense creatively no matter the combination of elements inside.
Marpipe has more than 130 pre-built templates for you to choose from in our library — all based on top-performing ads. Or you can build your own.
To make sure your template exhibits the look and feel of your brand, you can upload brand colors, fonts, and logos right into Marpipe to use across all your future tests.
03.

Your first ad template

Here’s a fun surprise: you’ve already built your first ad template without even knowing it. That’s because we recommend using your top-performing ad as your initial Marpipe ad template. (Hey, if it ain’t broke, why fix it?) We know it’s already working, so we can see if small tweaks can help it work even harder.
This also makes it easy to understand which elements to make fixed, which to leave as variables, and what kinds of variants to test. Starting your testing here will help you lay a solid foundation of creative intelligence to build off of moving forward.
Once you’re done editing your template, click “Add Variants” to move on.
Adding variants in the Marpipe app
04.

Adding variants

You can upload any image and video assets, and enter in-image and out-of-image copy that you want to test on Marpipe.
There are also plenty of graphic elements — shapes, buttons, and more — built right into the Marpipe library you can pull from.
To add a variant to your test, simply drag and drop an asset into the appropriate placeholder. Technically, you can add as many variants as you want, based on your testing goals and budget. But keep in mind: the more variants you add, the more ad variations you generate, and the smaller the sample size each one gets.
Let’s look at what this means in a practical example.
Editing templates in the Marpipe app
05.

How ad variants add up, fast

One of the biggest benefits of Marpipe is its ability to help you scale the process of building ad variants — saving your creative team tons of time and effort. Our platform generates ad variants for testing by automatically building every possible combination of variable elements and plugging them into the appropriate placeholders in the ad template.
Here’s some simple math to illustrate how quickly these variables add up:
3 images x 2 headlines x 2 offers = 12 ad variants
4 images x 5 headlines x 2 background colors x 3 CTAs = 120 ad variants
It’s important to keep an eye on your total number of ad variants relative to your test budget. Using the example above with 120 ad variants, let’s say you are only spending $1,000 on a 7-day test. That would only allot each ad variant around $8.
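Here is that arithmetic as a quick sanity-check script, assuming the test budget is split evenly across every variant (the function name and numbers are illustrative):

```python
from math import prod

def per_variant_budget(test_budget: float, *variant_counts: int) -> float:
    """Budget each ad variant receives when the test budget is split
    evenly across every possible combination of variable elements."""
    return test_budget / prod(variant_counts)

# 4 images x 5 headlines x 2 background colors x 3 CTAs = 120 ad variants
print(per_variant_budget(1_000, 4, 5, 2, 3))  # ~8.33 dollars per variant
```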
Having clarity around what you want to learn in each test will help you focus on adding the right variants, and keep the total number of versions manageable.

Chapter Five

Planning your multivariate test: budget, audience, and timing

You’ve built your template, uploaded your creative elements, and generated your ad variants. Now it’s time to put your money where your ads are.
01.

Budgeting your tests

Your overall ad creative testing budget will determine 1) how long you run your test and 2) your budget per ad group. Before we dive into the math, let’s lay out our constants.
Marpipe places every ad variant into its own ad set — each with its own equal budget. This prevents Facebook’s algorithm from automatically favoring a variant.
Marpipe lets you run either 7-day or 14-day tests. For simplicity’s sake, that nets out to either 52 7-day tests per year, each using 1/52 of your testing budget (meaning fewer variables and ad variants), or 26 14-day tests per year, each using 1/26 of your testing budget (meaning more variables and ad variants). The more budget per test, the more variables per test.
Budgeting 1x your average CPA per ad variant per test is a minimum that will allow you to establish a baseline of creative intelligence. However, the likelihood of a successful and meaningful test increases as your budget increases. Budgeting 2x your average CPA per ad variant per test ensures that each ad variant receives at least two conversions. This allows you to collect more valuable data on the variants tested.
Marpipe will show you how your budget breaks down before you launch. So if you wind up creating more variants than you have testing budget for, you can simply remove elements and shelve them for a future test. (And vice versa: if you find you haven’t included enough variants to hit your allotted testing budget, you can add variables until you do.)

Calculate your per-test budget

To ballpark it yourself: divide your annual ad creative testing budget by 52 for weekly 7-day tests, or by 26 for bi-weekly 14-day tests.
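As a rough sketch of that calculator in Python, folding in the 1x–2x CPA guideline from above (ballpark only; the function names are ours, not Marpipe’s):

```python
def per_test_budget(annual_budget: float, test_length_days: int) -> float:
    """Split an annual testing budget across 52 weekly 7-day tests
    or 26 bi-weekly 14-day tests."""
    tests_per_year = 52 if test_length_days == 7 else 26
    return annual_budget / tests_per_year

def max_variants(test_budget: float, avg_cpa: float, cpa_multiple: float = 2.0) -> int:
    """How many ad variants a test can support when each variant should
    receive cpa_multiple x your average CPA (2x recommended, 1x minimum)."""
    return int(test_budget // (cpa_multiple * avg_cpa))

budget = per_test_budget(52_000, 7)        # $1,000 per 7-day test
print(budget, max_variants(budget, 25.0))  # 1000.0 and 20 variants at a $25 CPA
```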
*This is just a ballpark estimate. Marpipe helps every customer determine the best testing budget for their needs during the onboarding process.
02.

Choosing your audience

Another variable to experiment with is who sees your ads. Marpipe connects directly to your Facebook Ads Manager account and uses Saved Audiences as your targeting. You can choose one or more of your Saved Audiences to run your test in front of.
Winning and losing ads and creative elements vary by audience. You may find that certain ads and creative elements cause certain audiences to convert at a higher rate than others.
We recommend starting with audiences that already perform well. Running your first tests in front of your top-performing audiences — those you already know are likely to engage and convert — is a smart way to collect a solid foundation of data on Marpipe.
Each chosen audience is a separate testing variable in Marpipe, meaning every audience added to the test becomes a multiplier of your ad variants. Keep this in mind as you structure your test against your per-test budget.
Example: 3 images x 2 headlines x 2 offers x 3 audiences = 36 ad variants
We recommend a minimum total audience size of 250,000 for tests with 10 ad variants or less. For tests with more than 10 ad variants, you should increase your audience size to at least 1 million to avoid audience overlap.
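Because every Saved Audience multiplies your ad variants, it helps to check the variant count and the audience-size rule of thumb together. A small hypothetical helper:

```python
def total_ad_variants(creative_combos: int, audiences: int) -> int:
    """Each Saved Audience added to a test multiplies the ad variants."""
    return creative_combos * audiences

def min_audience_size(ad_variants: int) -> int:
    """Recommended minimum total audience size: 250k for tests with 10 or
    fewer ad variants, at least 1M above that to avoid audience overlap."""
    return 250_000 if ad_variants <= 10 else 1_000_000

variants = total_ad_variants(3 * 2 * 2, 3)    # 12 creative combos x 3 audiences = 36
print(variants, min_audience_size(variants))  # 36 1000000
```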
03.

Test length and cadence

There are two test lengths in Marpipe: 7 days and 14 days. Your ad creative testing goals and overall testing budget will likely determine the length of your tests, but you don’t have to stick to just one test length exclusively.
You may decide to run one or the other depending on the test scenario. Here’s a quick comparison of the two:
Typical cadence
7-day test: weekly (though some Marpipe customers with larger testing budgets run two 7-day tests per week)
14-day test: bi-weekly

Budget
7-day test: smaller (typically 1/52 of your overall ad creative testing budget)
14-day test: larger (typically 1/26 of your overall ad creative testing budget)

Volume of variants
7-day test: fewer variables and ad variants (smaller budget = fewer variables)
14-day test: more variables and variants (larger budget = more variables)

Best for
7-day test: collecting quick creative intelligence on incremental changes; finding the best version of an ad that’s already working
14-day test: testing net-new creative ideas; testing ad creative that must perform during a finite window of time (product launch, holiday ads, etc.)

Chapter Six

Launching your creative test: deploying creative and knowing when (and how) to optimize

Your ad variants are generated, and your budget and test length are set. It’s go time. Here’s how to launch your test, make any necessary mid-test adjustments, and scale winning ads when you find them.
01.

Launching your test, step-by-step

Graphic art showing five steps in the test launch process
Step 1: Click on “Launch.” You can choose an already-existing campaign or create a new one.
Step 2: When creating a new campaign, you first need to confirm which campaign type you’d like to run: Conversions, Leads, Traffic, or Reach. Based on which campaign type you choose, you may need to confirm things like which pixel action you will optimize for.
Step 3: Next, you’ll be taken to the setup screen. This is where you’ll choose your audience target, budget, and test length. This is also where you’ll choose your “out-of-image” ad copy (the text that appears above and below your visual in the Facebook ad unit itself). You can also input any UTM parameters you’d like appended to your final URLs.
*NOTE: Out-of-image ad copy will also create more ad variants. This is another place to make sure you don’t go overboard by testing too many variables against your budget.
Step 4: On the left side, you’ll see a summary of your test, including the number of variants, budget, test length, number of audiences, and out-of-image copy variants. This is a good final check that your number of variables and variants aligns with your budget.
Step 5: Launch your test. It will appear in Facebook as paused — this is totally normal. Your test will automatically activate once Facebook approves your creative, so you do not need to set it live on Facebook.
Once a test has launched, you may see it labeled in Marpipe’s interface in one of four ways:
Queued: When you first launch your test in Marpipe, the campaign will be set to “off” in Facebook Ads Manager. You do not need to change this. Once the ads make it through Facebook’s creative approval process, the campaign will automatically activate. In Marpipe, you will see these not-yet-launched campaigns as “Queued.”
Live: Tests that have been approved by Facebook and are now active.
Complete: Tests that have run their full 7- or 14-day course.
Error: Tests that did not launch correctly. Below are some common errors.
Permissions error: Facebook Business Tools Terms Not Accepted
Translation: You have not accepted the terms and conditions for certain custom audiences in Facebook Business Manager.
Fix: Input your ACCOUNT_ID and BUSINESS_ID into this URL to accept the terms and conditions: https://business.facebook.com/customaudiences/value_based/tos.php?act=ACCOUNT_ID&business_id=BUSINESS_ID

Invalid parameter: Custom Age Selection Is Not Available
Translation: For advertisers running with a Special Ad Category, there are limitations on demographic targeting, such as choosing age ranges.
Fix: Try using a different audience or removing demographic parameters from Facebook Audiences Manager.

Application does not have permission for this action: No permission to access this profile
Translation: The most likely cause is that Marpipe was not granted full access to Facebook within Marpipe Settings.
Fix: Sign out of Facebook from Marpipe and sign in again, accepting all of the permissions when prompted during the sign-in process.

Permissions error: Permission Error
Translation: This vague error message has been known to pop up when your account is disabled due to out-of-date billing information.

Permissions error: Ad Account Has No Access To Instagram Account
Fix: Within Facebook, link your business Instagram page to the Ad Account in Business Settings under Instagram Accounts > Connected Assets.
Once a test is live, it will run for the full length of time you designated.
If you want to turn a test off prior to its end date, you’ll need to do so within Marpipe. Turning off a test in Facebook Ads Manager will be overridden by your Marpipe settings: Marpipe will turn it back on because it believes the test should still be running.
02.

Launch FAQ

Can I schedule tests ahead of time?

Yes! On vacation, over the holidays, at 3:27 in the morning — you can run creative tests at any of those times by queuing them up beforehand. Just choose a future start date for your test during the launch setup.

Why is each individual ad placed into its own ad set?

Marpipe places every ad variant into its own ad set with its own equal budget to prevent Facebook’s algorithm from automatically favoring a variant and skewing the test results. This is Marpipe’s way of controlling yet another variable in your test to give you the most valid creative intelligence possible.

Will my data be significant?

Your data may not reach statistical significance, so it’s important to pay attention to how your budget nets out at an ad-set level. Marpipe’s Confidence Meter delivers this indication visually: green indicates a high level of confidence, while gray indicates a low level. It’s better to test specific elements in smaller batches so that each ad is allocated enough budget to reach a higher level of confidence.

Do I need to exit Facebook Learning Mode (a.k.a. reach 50 conversions) to achieve significant test results?

Nope! Because multivariate testing involves testing a large number of ads at once, reaching 50 conversions is nearly impossible with most ad-testing budgets.
The good news is that not reaching stat sig does not render multivariate testing unreliable. It just means we have to look at early indicators of success rather than stat sig to help us make quick decisions about which ads and creative elements are performing and which ones are not. (See our section on stat sig for more.)
03.

Optimizing your tests

At some point in your test — maybe even early on — you’ll start to see winners and losers emerge, in terms of both ads and creative elements. Here are some ideas on how to optimize your tests based on early results and your testing goals.

Mid-test adjustments

Mid-test adjustments are a double-edged sword. On one hand, they leave you with uneven data in terms of spend, reach, and impressions across assets. On the other, you’re likely to get more results.
If your goal is data collection, we don’t suggest adjusting during a test as it will skew your insights. But if your testing goals are strongly focused on KPIs, mid-test adjustments might be a smart lever for you to pull.
Here are three you can try:

Increasing budgets

As data starts to roll in, increase budgets on ad sets capturing more conversions (as you would with normal optimization). This will result in uneven spend when it comes to the asset results in Marpipe’s Intelligence, but it will likely lead to more test results.

Pausing poor-performing ads

You can begin pausing poor-performing ads and either leave top performers on or increase budgets to make up for those you pause. Again, the data will be uneven in terms of spend, but you’ll likely capture more test results.

Pausing an entire test that isn’t working (and what to do next)

If you aren’t seeing significant results and don’t think you’ll have enough data to make informed decisions, it doesn’t hurt to go into Marpipe and pause your test.
After pausing, take a look at your asset data in Marpipe’s Intelligence and use any data you can glean there to start a new test. You can simply clone your experiment in the Create section, remove poor-performing variants, and maybe add a few new ones. Or you could come up with an entirely new test idea if you don’t think the creative is working.

Scaling winning ads

Congrats, kudos, and bully for you — you found a winning ad! Now it’s time to make it work for you as hard as possible. There are three main ways to scale your top performers: duplicate them, reactivate them, or both.

Duplicating winners

One way to scale winning ads is to duplicate them outside of Marpipe directly into your current evergreen or scaling campaign on Facebook. Because this campaign is usually optimized with the exact audiences and budgets you know are already working, it’s typically a good bet that your Marpipe winners will perform well here, too.

Reactivating winners

Reactivating winning ads directly inside your Marpipe test campaign is another great option because, over the course of your test, those ads have been “learning” — meaning Facebook has collected data on who will convert from that ad set and ad. When a campaign/ad set/ad combo has learned a decent amount (more conversions = more learnings), it has an even higher likelihood of converting, because Facebook will continue to find people who are likely to convert from that combo.

Duplicating and reactivating

The best of both worlds. This option puts your winning ads in front of your top-performing audiences and potential new customers Facebook deems likely to convert.

Chapter Seven

I ran my first test (Yay!) Now what?

Huzzah — your first test is complete! Which means you’re probably wondering what to do next. Here, we’ll explore some ideas on how to use your data to build even better ad creative, continue finding winning ads, and deepen your brand’s pool of creative intelligence.
01.

Deeper learning vs. wider learning

What you test next largely depends on what you learned in your first test. We find that most test results push customers to test in one of two directions: deeper or wider.

Deeper learning

If your test results show clear creative outliers — ads and creative elements that plainly outperformed the others — you want to go deeper. Keep probing variants of those elements.
Next step: Test a subset of your winning variant or variants.

Wider learning

If your test results in no clear winners or losers, you want to go wider. Try something totally different — see if this new direction can help you identify any creative outliers.
Next step: Test a completely new template and all new variants.

Deeper learning is the ultimate goal

It’s OK if you don’t end up with any winning ads every once in a while. It happens. Keep trying new ideas until creative outliers present themselves, then go deep into testing variants of those outliers.
02.

Evolving your hypotheses

Using your newfound creative intelligence, it’s time to start thinking about your next test and what you want to learn from it.
This, right here, is the beauty of Marpipe: applying insights over and over to scale better tests and smarter ad creative moving forward.
Remember our example hypotheses? Here they are again, paired with a new hypothesis based on hypothetical multivariate test results.
I want to see which generates more conversions: images of my product being used by a model or images of the product by itself.
Test results showed that products being used by a model overwhelmingly outperformed images of the product by itself.
New hypothesis: I want to see what happens when I test images of my product being used by a model whose face can be seen in the image against images of my product being used by a model whose face cannot be seen in the image.
I want to see how customers react to different types of offers and discounts.
Test results showed that “$30 off” caused customers to convert most often.
New hypothesis: I want to see if the placement of the offer in the ad (left side, right side, top, bottom, etc.) can further increase conversion rates.
I want to see which background colors and patterns generate more leads.
Test results showed that blue and green backgrounds generated the most leads.
New hypothesis: I want to see if different shades of blue and green can further increase lead generation.
I want to see how the color of the product and the call to action on the button affect my conversion rate.
Test results showed that conversion rates were highest when the product was shown in green and the CTA read “Shop the sale.”
New hypothesis: I want to see if new background patterns behind those two top-performing creative elements can boost conversion even further.
I want to see if the CPA of our current top-performing ad could get any lower with a few small tweaks.
Test results showed that simply changing the font of the headline boosted performance.
New hypothesis: I want to see what happens to ad performance when I use the winning font on other ad variants.

Are you crazy...

about catalog ads? You’re not alone. Join over 8,000 other marketers in The Catalog Cult - the world’s best newsletter about catalog ads.