I’ve been working in the influencer space since 2012 — before influencer marketing was even a term. In that time, I’ve built an influencer tech company from ideation to acquisition, been part of a thriving, growing content agency under the AT&T umbrella, and co-founded an influencer agency, Honest Social, focused on using influencers specifically for acquisition and growth. Across these companies, I’ve helped manage influencer budgets of well over $100M annually in this more-than-$6B industry, building influencer strategies for everyone from Warby Parker and thredUP to Estée Lauder and Coca-Cola, and everyone in between.
I know what you’re thinking: *yawn*, “cool resume, bro… what does it have to do with multivariate testing?” A) Don’t call me bro. B) A ton. The reality is that the brands most effective at using influencers to move the needle are the same brands that treat influencer marketing with the same rigor as their AdWords or Facebook Ads campaigns: the marketers who are consistently A/B testing to get the right partner delivering the right message with the right product at the right time. Pairing this rigor with the trust and authenticity of your favorite social talent is why influencer marketing delivers, on average, 11X the ROI of other forms of digital media and 4X the EMV of paid media.
However, while effective, it’s also arduous. Like all marketing, it’s an investment of time and money to get the result you want. Without a willingness to test, you’re just a marketer jumping on the train (which makes sense; who doesn’t like trains?), and you’re going to find yourself investing a lot more time and money than necessary, without knowing why you got the results you did or how to get the results you want.
With the majority of our clients, we approach influencer marketing with an “always on” mindset across a roster of talent, to better understand what works and, ultimately, drive toward a better payback. We are constantly tweaking each month’s talent and product integrations based on the previous months’ talent rosters, content integrations, and results. Let’s take a real-world example. Early in our Instagram integrations with one of our clients, a haircare brand, we tested everything from the CTA to the way the product was displayed. We had historical data from brands in similar spaces with similar goals that we could pull from, which did shorten our initial testing cycle. But even with industry knowledge, there are always going to be slight variations from brand to brand and product to product.
In our first campaign for this brand, we worked with dozens of talent and identified a likely link between curly-haired talent and increased engagement, while women with wavy-to-straight brunette hair drove a higher click-through rate. So we tested again the next month. Again, the results were consistent.
Then we continue to do this over and over: dozens upon dozens of talent, testing variations on everything from product messaging to talent “type,” until we can build the optimal roster of brand ambassadors to help our clients reach their goals. This is multivariate testing. This continuous tweaking is the difference between hitting your numbers and not.
If you’re optimizing for brand affinity, recognition, or building a relationship with your audience, you’d be better off working with talent that drives higher engagement for your brand. If payback is the goal, however, you’d want to optimize against site visits and purchases. In the haircare example, we were able to improve traffic by nearly a full percentage point almost immediately (and continued to improve upon this with every iteration). If you have 1 million viewers of your influencer content, that 1% improvement is an extra 10,000 potential consumers on your site. This is why testing, and understanding the tests, matters.
In the influencer game, though, it’s a costlier, more tedious process. To reach statistical significance, you have to partner with many different talent speaking to a large enough combined audience. Talent integrations can cost anywhere from a few hundred bucks to a few hundred thousand dollars. Then you have to wait for the content creation process (which, depending on format, platform, and production quality, can take as long as a month). Then you have to wait for the performance results (again, days to months) before you can draw sound conclusions about what is and isn’t working. It’s completely worth it: when the influencer marketing engine works, you’d be hard pressed to find anything that works better (nearly 90% of marketers rate influencer marketing as equal to or better than other channels).
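To give a sense of the math behind “a large enough audience,” here’s a minimal sketch of a two-proportion z-test, the standard way to check whether one talent cohort’s click-through rate genuinely beats another’s rather than differing by chance. All the click and view counts below are hypothetical, not real campaign numbers.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test comparing the click-through rates of two cohorts.

    Returns the z statistic and its p-value; a small p-value means
    the difference in CTR is unlikely to be random noise.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled CTR under the null hypothesis (no real difference)
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cohorts: curly-haired talent vs. wavy/straight brunette talent
z, p = two_proportion_z(clicks_a=450, views_a=50_000,
                        clicks_b=620, views_b=52_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With small audiences the same CTR gap can come back with a large p-value, which is exactly why a handful of low-reach integrations can’t tell you which talent “type” is winning.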
But this got me thinking: what would it look like if you could run a solution similar to Marpipe’s, using the cost efficiencies and pacing of Facebook to gather as much information as possible, better determining the talent you’re going to work with while shrinking the testing phase and its cost? We could treat Facebook less as a marketing channel and more as an inexpensive data engine, creating literally thousands of iterations of your ideal talent.
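As a rough sketch of how “thousands of iterations” add up: crossing even a few talent attributes multiplies quickly. The attribute names and values below are purely hypothetical placeholders, not dimensions from any real campaign.

```python
from itertools import product

# Hypothetical dimensions you might want to test; the real ones would
# come from whatever the brand needs to learn about its ideal talent.
hair_types = ["curly", "wavy", "straight"]
ctas = ["Shop now", "Try it free", "Get 20% off"]
displays = ["in-hand", "on-shelf", "mid-use"]

# Every combination becomes one ad variation to run as a cheap Facebook test.
variants = [
    {"hair": h, "cta": c, "display": d}
    for h, c, d in product(hair_types, ctas, displays)
]
print(len(variants))  # 3 * 3 * 3 = 27 variants from just three small dimensions
```

Add a couple more dimensions with a few values each and you’re past a thousand variants, which is trivial to run as low-cost ads but would be wildly expensive as individual talent integrations.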
The medium is different, but the audiences are the same. And Marpipe has proven this works in other arenas. For example, its work with a Fortune 500 company helped determine which in-store display setup for a subsidiary brand was most likely to drive purchase, and it did so by efficiently running thousands of ad variations on Facebook.
I’m not sure what this looks like for influencer marketing (yet), but I’m excited to begin testing it with some of my clients. If we can increase talent-type variations from 10–50 up to 100–1,000 and cut the learning curve from two months to two weeks, all while spending less to do so, there is the potential to change the entire influencer marketing landscape. Stay tuned to see how a test of this plays out with one of my clients. Or, if you’re interested in testing this, let me know.