Test Versus Control, Part 1

Our company advises a variety of clients on email marketing matters. When we speak with prospects for the first time, we are often asked to enlighten them by revealing the “best” way to send email. The simple answer is: Utilize personalization and customization as much and as frequently as possible to ensure information delivered to customers is relevant and useful.

That usually fails to cut the mustard. Prospects push on. They want specific tactics.

Questions we receive frequently include:

  • Are customers more likely to open an email if the sender field is a company or an individual?
  • Do customers prefer newsletters with a left-hand or right-hand rail?
  • HTML or text: Which generates better results?

As a rule, I support sending email “from” individuals, using a right-hand rail for newsletters in an HTML format (recipients should have the option of receiving text). That said, rules are made to be broken and probably best ignored.

The correct answer to all email marketing questions is: Test it. Employ test versus control methodology for definitive answers to tactical questions. Prospects are never particularly fond of that response. Prospects like certainty. Prospects like consistency. Prospects like absolutes. In email marketing, these don’t exist.

Different audiences respond differently. If I were to send the same email campaign to a group of experienced tech professionals and a sampling of Internet newbies, the results would be vastly disparate. Different open rates. Different click-through rates (CTRs). Different conversion rates. Different unsubscribe rates. Clients who serve experienced technology professionals should approach their market differently than clients targeting newbies. Precisely how the approach should differ can only be determined through testing.

Test versus control methodology analyzes observed customer data generated through actual campaigns to determine the most effective tactic for achieving the desired result. Test versus control is achieved by segmenting the audience into a minimum of two parts. Each segment receives an identical email, except for a single variable. Observed customer data is collected and analyzed to determine which audience segment took the desired action most frequently.

Let’s look at test versus control with respect to an e-newsletter. Our client’s newsletter provides customers with valuable, relevant information. At the same time, it makes a promotional offer in a right-hand rail. It draws prospects with information to expose them to the offer.

Each issue is treated as an opportunity to optimize future editions. We’re never satisfied. We seek improvement. With the upcoming issue, our goal is to learn if a change in design will generate incremental revenue.

The newsletter consists of two columns: a body column approximately two-thirds of the width of the newsletter and a right-hand rail filling the remaining third. This is the control version. The test version also has two columns, each the same width as its counterpart in the control version, but the test features a left-hand rail instead of a right-hand one. The rail content in both versions is identical. The only change is positioning.

We divide the audience into two segments: control and test. The test segment is expected to be much smaller than the control segment, but it should be large enough to ensure valid results that can be compared against the control. For the purposes of this example, assume the control segment is 9,000 and the test segment 1,000. The test segment should be randomly selected from the total subscriber base so the characteristics of both groups are comparable. It's important to take into account variables such as subscription length and subscription source to ensure the two groups are equivalent.
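As an illustration, the random split described above can be sketched in a few lines of Python. The `subscribers` list and the 9,000/1,000 sizes are assumptions drawn from this example, not a real subscriber file:

```python
import random

def split_test_control(subscribers, test_size, seed=42):
    """Randomly assign subscribers to a test segment of the given size;
    everyone else forms the control segment."""
    shuffled = subscribers[:]              # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split repeatable
    return shuffled[test_size:], shuffled[:test_size]  # (control, test)

# Hypothetical 10,000-subscriber list split 9,000 / 1,000
subscribers = [f"user{i}@example.com" for i in range(10_000)]
control, test = split_test_control(subscribers, test_size=1_000)
print(len(control), len(test))  # 9000 1000
```

In practice you would also stratify on variables like subscription length and source, but a uniform random draw is the baseline.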

The next part’s easy. We send the control segment the control version of the newsletter. The test segment gets the test version of the newsletter. We wait for observed data to be collected. In this instance, we’re collecting and analyzing CTRs, both total and unique, and conversion rates for the promotional offers in the right rail of the control version and left rail of the test version. Remember, the content of both is identical. If one version outperforms the other with a higher CTR or conversion rate, it’s attributable to the position of the rail. It is important to measure both these metrics. The numbers do not always move in the same direction.

After 48 hours, the results are in:

Audience Segment      Total CTR   Unique CTR   Conversion Rate
Control (n = 9,000)   5.2%        2.7%         0.5%
Test (n = 1,000)      8.1%        5.1%         1.0%
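Whether a difference like this is valid, rather than random noise, can be checked with a standard two-proportion z-test. A minimal sketch, using the conversion counts implied by the numbers above (0.5% of 9,000 is 45 conversions; 1.0% of 1,000 is 10) and a conventional 95% confidence threshold:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two proportions (pooled)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Conversions from the table: control 45 of 9,000; test 10 of 1,000
z = two_proportion_z(45, 9_000, 10, 1_000)
print(round(z, 2), z > 1.96)  # |z| > 1.96 means significant at the 95% level
```

Here z comes out just above 1.96, which is one reason the article still recommends repeating the test before changing the design.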

The test clearly outperformed the control. For the next mailing, we'll repeat the test to validate the results. If they hold, the design changes.

This approach can be used to test any newsletter variable: sender address, format (HTML versus text), subject line, and so on. With our client, we use every issue of the newsletter as an opportunity to optimize future mailings. We constantly test variables seeking opportunities for improvement. It never ends — nor should it.

In part two of this series, we’ll look at test versus control with an email promotion. Email promotions differ from e-newsletters because they’re typically one-time mailings intended to sell at the moment of delivery. An email promotion shouts, “Buy This Now!” In this environment, testing ensures the message sent to the largest customer base has the greatest likelihood of success. Check back in two weeks to see how we do it.
