One of the fundamental goals of marketing is to prompt customers to take a specific action, typically buying or using a given product or service. That’s why it’s critical for marketers to know whether their efforts worked: in other words, did they drive people to the desired action?
Say Johnny passes a billboard for a Reddi Burger breakfast sandwich on his way to work, gets to work and sees his coworker munching on the same sandwich, and later that afternoon gets an email with a coupon for Reddi Burger. The following day, Johnny wakes up and orders a Reddi Burger breakfast sandwich to start his day.
What drove Johnny to buy the breakfast sandwich? Was it the email, the billboard, or seeing his colleague enjoy it? Of course, it could have been a combination of all three. Luckily, we live in an age where millions of data points are collected every second. That data lets us back our marketing decisions with evidence, figure out how our customers like to be reached, and find the best approach to take with them.
A good way to gauge the effectiveness of marketing efforts is through testing. Done correctly, testing confirms whether the hours of creative briefing, copy edits, and email builds are actually making a difference.
Share of revenue or share of product
Why and when? Revenue tests are low-risk and the easiest to implement. They’re best used in cases when marketers don’t want to chance missing revenue goals, are not yet ready to overhaul strategy, or are looking for insight without too much overhead. Think of this as the first baby step before establishing a robust testing methodology.
How? The beauty of this test is that you don’t need to change anything about the campaign you were planning on running — you just need to be able to create mutually exclusive segments within the audience you’re targeting.
For example, say you want to send an email about spring jackets to your entire email-engaged audience. The goal is to understand which section of your audience brings in the highest revenue, so you can apply more personalization in the future and improve conversion. To find the desired segment, set up mutually exclusive sub-audiences within the engaged audience: people who have previously purchased jackets, people who haven’t purchased but have a predicted affinity for jackets, and everyone else.
Keep track of the revenue that these sub-audiences bring in at the end of the campaign. If over 80% of the revenue share falls within the jacket buyers and jacket affinity groups, that’s a great reason to only target them next time you have a similar product for sale.
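Here’s a minimal sketch of that revenue-share calculation in Python with pandas. The table, column names (customer_id, segment, revenue), and figures are all hypothetical; in practice the data would come from your ESP or customer data platform.

```python
import pandas as pd

# Hypothetical purchases attributed to the spring-jacket email.
# In practice this would come from your ESP or customer data platform.
orders = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "segment": [
        "jacket_buyers", "jacket_buyers", "jacket_affinity", "jacket_buyers",
        "everyone_else", "jacket_affinity", "jacket_buyers", "everyone_else",
    ],
    "revenue": [120.0, 85.0, 60.0, 150.0, 20.0, 95.0, 70.0, 15.0],
})

# Revenue share per mutually exclusive sub-audience.
share = orders.groupby("segment")["revenue"].sum()
share_pct = (share / share.sum() * 100).sort_values(ascending=False)
print(share_pct.round(1))

# If buyers + affinity together clear the ~80% rule of thumb, consider
# narrowing the next similar campaign to those two groups.
targeted = share_pct[["jacket_buyers", "jacket_affinity"]].sum()
print(f"Jacket buyers + jacket affinity: {targeted:.1f}% of campaign revenue")
```

The 80% cutoff is just the rule of thumb from above; set the threshold that matches your own revenue goals.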
Control/treatment testing
Why and when? When marketers want to test something fundamentally new or gauge how changes in a campaign were received, a control and treatment format is a great option. This type of testing borrows from scientific and statistical methods and gives results focused on incremental behavior changes. Control testing is sometimes conflated with A/B testing, and while the two are similar, the difference is that, as the name suggests, control testing requires a holdout group that receives nothing, whereas A/B testing simply compares two variants against each other.
How? The key to measuring campaign incrementality or impact is the holdout control group: a subset of the targeted customer segment that does not receive the given campaign. The users assigned to the control group are statistically identical to those targeted. The only thing different about them is that they don’t receive messaging about the campaign. By tracking the behavior of these groups, you can understand what behavior was driven by the campaign, compared to what customers would have done anyway.
Say you want to launch a new churn prevention campaign and need to clearly understand the impact of the new email on customers who have churned. The control group, say 50% of your segment, serves as your baseline and will not receive any emails; the other half of the audience will. Then, when the results roll in, compare key metrics such as revenue per user and conversion rate between the two groups to see whether the campaign moved the needle.
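A minimal sketch of that holdout setup, in Python with pandas and NumPy. The audience, conversion rates, and revenue figures are simulated purely for illustration; with real data you would join in observed outcomes after the campaign has run.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical audience for the churn prevention campaign.
audience = pd.DataFrame({"customer_id": range(1_000)})

# Random assignment is what makes the two groups statistically comparable:
# "control" receives nothing, "treatment" receives the new email.
audience["group"] = rng.choice(["control", "treatment"], size=len(audience))

# Simulated outcomes for illustration only; in reality you would join in
# observed conversions and revenue once the campaign has run.
base_rate, assumed_lift = 0.05, 0.02
p = np.where(audience["group"] == "treatment", base_rate + assumed_lift, base_rate)
audience["converted"] = rng.random(len(audience)) < p
audience["revenue"] = np.where(
    audience["converted"], rng.normal(40, 10, len(audience)), 0.0
)

# Compare key metrics between the two groups.
summary = audience.groupby("group").agg(
    conversion_rate=("converted", "mean"),
    revenue_per_user=("revenue", "mean"),
)
print(summary)

# Incrementality: treatment minus the do-nothing baseline.
print(summary.loc["treatment"] - summary.loc["control"])
```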
Control/treatment testing 2.0
A control/treatment structure can also be used to understand the holy grail of marketing: personalization. Send a personalized email on new spring nail polish colors to 50% of your target audience and a generic announcement email to the other half. You can see right away whether the personalization improved conversion or revenue for the campaign.
You can up the ante with A/B testing by layering it with a reliable control testing structure. Do this by randomly breaking up an audience into three groups — the control group, A group, and B group.
Then think of what you’d like to test, perhaps two new pieces of creative. The control group shows you how customers would have behaved if you hadn’t targeted them at all, giving you a clean baseline, and the A and B groups show you how effective each piece of new creative is relative to that baseline and to each other.
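As a sketch of the three-way split, here is a short Python example. The 20/40/40 weights, conversion rates, and audience are assumptions for illustration only, not real campaign data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical audience split randomly into a control group plus A and B.
# The 20/40/40 weights are an assumption; many teams keep the control smaller.
audience = pd.DataFrame({"customer_id": range(3_000)})
audience["group"] = rng.choice(
    ["control", "creative_a", "creative_b"], size=len(audience), p=[0.2, 0.4, 0.4]
)

# Simulated conversion rates for illustration only: the control group is
# never emailed, while A and B each receive one of the new creatives.
assumed_rates = {"control": 0.030, "creative_a": 0.045, "creative_b": 0.055}
p = audience["group"].map(assumed_rates)
audience["converted"] = rng.random(len(audience)) < p

# Conversion by group, plus each creative's lift over the no-contact baseline.
conv = audience.groupby("group")["converted"].mean()
print(conv)
print(f"Lift of creative A over control: {conv['creative_a'] - conv['control']:+.3f}")
print(f"Lift of creative B over control: {conv['creative_b'] - conv['control']:+.3f}")
```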
Solid foundation = stellar results
No matter how many tests you run or what kind, none of the results will be accurate or reliable if you’re not beginning with a solid base of customer data. The right customer data tools will provide you with the perfect foundation to run a number of tests, allowing you to test the success rate of new campaigns, the effects of personalization, and much more.
Learn more about the importance of personalization in testing and how to use it to boost revenue in our guide.