Sometimes it's difficult to figure out what to test to improve performance. But sometimes the hard part comes after the test, in determining the winner. Case in point: a recent subject line test I undertook for a client.
The publication is an email newsletter, part of a paid subscription to a website. Only paying subscribers receive the newsletter; its purpose is to make readers aware of content that might be of interest to them on the site and motivate them to click through and view it.
So the goal of this newsletter is to get people to click through; we're not looking for revenue to be generated directly from the email. There's a feeling that people who click through on a regular basis are more likely to renew their subscriptions. It's logical, although I've not seen any hard data to support it for this product. So while the goal of the newsletter isn't to generate revenue, the belief is that by getting people to engage with the newsletter we are increasing the likelihood that they will renew.
This round of testing involved the subject line. The control subject line was "Subscriber Newsletter." The formula we used to generate the test subject lines was to include a key theme unique to each issue of the newsletter in the first 25 characters of the subject line. The hypothesis is that by featuring a key theme prominently in the subject line we'll increase opens and, as a result, clicks to the site.
Notice that I said "test subject lines," with "lines" plural. Because of the list size (roughly 20,000 total), we needed to test over two or three sends in order to get results that were statistically significant. We then compiled the overall results to determine the winner of the test.
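To show why a list this size forces testing across multiple sends, here's a minimal sketch of a pooled two-proportion z-test on open rates. The recipient counts and open rates below are hypothetical, not the client's actual data; the point is that modest differences need a fair amount of volume before the p-value drops below 0.05.

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Pooled two-proportion z-test: is the difference in open rates significant?"""
    p1, p2 = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical pooled numbers across sends: 10,000 recipients per cell,
# 22.0% vs. 20.5% open rates.
z, p = two_proportion_z(2200, 10000, 2050, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a single send, each cell would be far smaller, the standard error larger, and the same gap in open rates could easily fail to reach significance; pooling results over two or three sends shrinks the standard error enough to call a winner.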
So how'd we do? Here's where the debate begins…
The control subject line generated higher open and unique click-through rates. But the test subject line bested the control in total click-through rate as well as both unique and total click-to-open rates.
Let me quickly explain the difference between the four click-through metrics we're looking at:

- Unique click-through rate (CTR): the number of distinct recipients who clicked, divided by the number of emails delivered.
- Total CTR: all clicks, including repeat clicks by the same recipient, divided by the number of emails delivered.
- Unique click-to-open rate (CTOR): distinct recipients who clicked, divided by distinct recipients who opened.
- Total CTOR: all clicks, divided by distinct recipients who opened.
It's standard operating procedure to focus on the unique CTR and CTOR metrics, but in some instances the total CTR and CTOR should also be considered. That's the case with this email product. The management team focuses heavily on the amount of traffic driven to the site each month from the newsletter. Each click translates into a visit, so the total number of clicks equals the total number of visits (visits, not unique visitors), and that's why we look at the total metrics as well.
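The four metrics above can be computed from just four counts. This sketch uses hypothetical numbers (illustrative only, not the client's data) and follows the common convention of using unique opens as the denominator for both CTOR variants; definitions can vary by email platform.

```python
# Hypothetical counts for a single send (illustrative only)
delivered     = 10_000
unique_opens  = 2_200   # distinct recipients who opened
unique_clicks = 400     # distinct recipients who clicked at least once
total_clicks  = 700     # every click, including repeat clicks per person

unique_ctr  = unique_clicks / delivered      # share of delivered who clicked
total_ctr   = total_clicks  / delivered      # clicks per email delivered
unique_ctor = unique_clicks / unique_opens   # share of openers who clicked
total_ctor  = total_clicks  / unique_opens   # clicks per opener

print(f"unique CTR {unique_ctr:.1%}, total CTR {total_ctr:.1%}")
print(f"unique CTOR {unique_ctor:.1%}, total CTOR {total_ctor:.1%}")
```

Notice that total CTR and total CTOR reward repeat clickers: a small group of highly engaged readers can push the total metrics up even while the unique metrics stay flat, which is exactly the tension in this test.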
So which subject line, the control or the test, would you call the winner? To recap what the metrics are telling us: the control won on open rate and unique CTR, while the test won on total CTR and both CTOR measures.
One more data point, looking at raw numbers rather than rates: we mailed slightly fewer emails with the test subject lines (about 1 percent fewer), but those versions generated slightly more total clicks (about 2 percent more) than the control.
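That combination, fewer sends but more clicks, mechanically produces a higher total CTR for the test. A quick sanity check with hypothetical volumes chosen only to match the percentages above:

```python
# Hypothetical volumes chosen to match the percentages in the text
control_sent, control_clicks = 10_000, 700
test_sent   = round(control_sent * 0.99)      # about 1 percent fewer emails
test_clicks = round(control_clicks * 1.02)    # about 2 percent more total clicks

control_total_ctr = control_clicks / control_sent
test_total_ctr    = test_clicks / test_sent

print(f"control total CTR {control_total_ctr:.2%}")
print(f"test total CTR    {test_total_ctr:.2%}")
```

A smaller denominator and a larger numerator both push the test's total CTR up, so the total-clicks camp and the total-CTR metric will always agree here; the disagreement only appears once unique clickers enter the picture.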
So, which would you say won: the control or the test subject line?
It comes down to a key question: are you looking to maximize website visits or unique website visitors? Do we want slightly more people, even if they are less engaged (click less), or slightly fewer people who are much more engaged (click more)?
For those within the business unit focused on maximizing website traffic, the raw data on total clicks was the key indicator: more total clicks even though fewer emails were sent. They saw the test subject line as the winner.
For those on the team focused on maximizing unique website visitors, the unique CTR told the story; they sided with the control as the winner.
So what would you do?
In the end, we stuck with the control subject line, and we're now making plans for another round of subject line testing to see if we can get it all: more people opening, and more unique and total clicks, from a subject line that is more descriptive than just "Subscriber Newsletter."
I'll keep you posted.
Until next time,
Jeanne Jennings is a 20 year veteran of the online/email marketing industry, having started her career with CompuServe in the late 1980s. As Vice President of Global Strategic Services for Alchemy Worx, Jennings helps organizations become more effective and more profitable online. Previously Jennings ran her own email marketing consultancy with a focus on strategy; clients included AARP, Hasbro, Scholastic, Verizon and Weight Watchers International. Want to learn more? Check out her blog.