Catalog ads are deceptively simple.
Set up your feed, plug it into Meta, let the algorithm match products to people. What could go wrong?
A lot, apparently.
We hear the same questions during nearly every client onboarding: How much should I spend on a test? What's the right campaign structure? Should I use cost caps or let it rip?
So we went straight to the source. We asked media buyers from some of the sharpest performance agencies in the industry—Common Thread Collective, Ovative, Power Digital, and WITHIN—to answer the seven questions we hear most often.
Their answers weren't always the same. But that's the point. Catalog ads aren't a paint-by-numbers exercise, and the nuance in these responses is where the real learning lives.
Before you test anything, you need enough budget to actually learn something. Underspend and you're just generating noise.

The math: If your CPA is $40, budget at least $2,000 per ad set for the test. That buys roughly 50 conversions, about what an ad set needs to get through the learning phase. Go shorter than 14 days and you're reading tea leaves.
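As a rough illustration, here's a minimal sketch of that budget math in Python. It assumes a target of about 50 conversions per ad set, which is where the $2,000 figure comes from; the exact conversion threshold and your real CPA will vary.

```python
# Test-budget sketch: assumes ~50 conversions per ad set, a common rule of thumb
# for getting past the learning phase. The 50-conversion target and 14-day window
# are assumptions for illustration, not hard platform limits.

def min_test_budget(cpa: float, target_conversions: int = 50) -> float:
    """Minimum spend per ad set to reach the target conversion count."""
    return cpa * target_conversions


def min_daily_budget(cpa: float, test_days: int = 14, target_conversions: int = 50) -> float:
    """Daily spend needed to hit that target within the test window."""
    return min_test_budget(cpa, target_conversions) / test_days


if __name__ == "__main__":
    cpa = 40.0  # example CPA from the article
    print(f"Per-ad-set test budget: ${min_test_budget(cpa):,.0f}")   # $2,000
    print(f"Minimum daily budget:   ${min_daily_budget(cpa):,.2f}")  # ~$142.86
```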
This is where most people overcomplicate things. The goal is clarity—if you can't explain what you're testing, you won't be able to interpret the results.

The principle: Isolate one variable. Test the catalog mechanic first. Layer complexity later.
This question sounds simple, but it gets at something fundamental: what are you actually trying to learn?

The insight: A/B tests tell you which version won. Lift tests tell you whether any of it mattered. Different questions, different tools.
More variants means more learning, right? Not exactly. There's a tension between exploration and statistical validity.

The discipline: Test one variable at a time. 4–5 variants. 14+ days. Anything else muddies the signal.
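Extending the same sketch: total test spend scales with variant count if each variant gets its own ad set funded to that per-ad-set minimum. The one-ad-set-per-variant setup and the 50-conversion target are assumptions for illustration, not the experts' prescription.

```python
# Total-budget sketch for a multi-variant test. Assumes one ad set per variant,
# each funded to the ~50-conversion minimum from the earlier example.

def total_test_budget(cpa: float, variants: int, conversions_per_variant: int = 50) -> float:
    """Total spend for one isolated-variable test, one ad set per variant."""
    return cpa * conversions_per_variant * variants


if __name__ == "__main__":
    for n in (4, 5):  # the 4-5 variant range suggested above
        print(f"{n} variants at $40 CPA: ${total_test_budget(40.0, n):,.0f}")
    # 4 variants -> $8,000 total; 5 variants -> $10,000 total
```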
This one depends on where your audience sits in the funnel—and what you're optimizing for.

The framework: High intent? Send them straight to the product. Earlier in the journey? Give them room to explore.
Here's where our experts diverged—which means this is exactly where you need to think carefully about your own context.
The case for efficiency-based bidding: CTC warns that unconstrained tests can learn the wrong lessons, optimizing toward conversions that are cheap but low-value.

The case for open bidding: Ovative argues that the test phase is exactly when the algorithm should explore freely, with constraints layered on afterward.

The takeaway: Both approaches have logic. CTC's point about "learning the wrong lessons" is worth sitting with—if your test optimizes toward cheap-but-low-value conversions, you might scale the wrong thing. But Ovative's point about letting the algorithm explore freely during testing has merit too. Know your margins and decide accordingly.
We saved this one for last because it's the most important. Every expert has seen the same pattern.

The lesson: Start broad. Trust the algorithm more than your intuition. Narrow with intention, not anxiety.
Catalog ads aren't a set-it-and-forget-it channel. They require disciplined testing, clear structures, and the patience to let the algorithm learn.
But when they work, they work—scaling dynamically across your entire product catalog in ways that static creative simply can't match.
The media buyers we talked to manage millions in spend across some of the most sophisticated brands in DTC, and their playbook isn't complicated: budget for real learning, isolate one variable at a time, give tests at least 14 days, and start broad before narrowing.
And above all: stop trying to outsmart the machine. Give it room to learn, and it'll learn.
