Testing: Why Failure Must Be an Option

Several years ago I was speaking at a marketing conference and had to follow NASA icons Gene Kranz and Jim Lovell of the historic Apollo 13 mission, which never landed on the moon. Mr. Kranz repeatedly used the phrase “failure is not an option” when describing his days running mission control, and certainly the story of Apollo 13. “Failure Is Not an Option” is also the title of the book he was promoting at the time. As I listened and contemplated how I was going to follow these two American icons and pivot the audience from exciting space stories to a discussion of the need for relevant marketing, it occurred to me that failure must be an option. So I got up on stage and reminded the audience that I was from Jupiter and knew nothing about bringing men back safely from the moon, but that we marketers are among the few professionals for whom failure is not only tolerated but is actually a necessary ingredient in optimizing programs. Failure is an element of testing; even a simple A/B test is predicated on the notion that one part will outperform (succeed) the other, which is a degree of failure.

At my firm, we do a lot of survey work and market research, and recently we completed a survey with some staggering findings. In a survey last month of 368 U.S. marketing executives, we asked respondents to identify the marketing tactics they regularly utilize (at least once a month). Just 31 percent of the executives said they conduct A/B testing for their email marketing campaigns, and when we asked about more sophisticated testing methods, those results tracked even lower. Perhaps the real failure here is that so few email marketing executives use testing as a scientific tactic for optimizing mailings; instead, most simply guess at which elements will improve the performance of their mailings.

Granted, employing testing does increase production time and often requires more labor resources, but embarking on a new segmentation scheme without properly testing it is a futile exercise that wastes resources. Additionally, I did a research study way back in 2004 that found that email marketers who tested regularly had higher conversion rates than those who did not. Testing can make a real difference to your program, and here are some suggestions on how to get started with the basics.

  • Test only one element or variable at a time. For example, test different offers and test different creative formats, but do not test both at the same time. This will help to determine which changes are driving the campaign’s performance.
  • Expose customers to tests at the same time. To reduce the number of variables, execute tests on the same day, in the same mail stream. This accounts for any day-of-week response fluctuations. Eventually you will likely want to test day-of-week performance, but as noted above, you shouldn’t also be testing subject line length at the same time. Again, testing is about identifying which variable is making the most difference, so to start, keep it simple: test one element at a time.
  • Tests must be measurable and statistically relevant. While it’s easier to create accurate samples in a 50/50 A/B split, marketers employing percent or nth testing must ensure that the test cells are evenly split and large enough to yield more than directional, anecdotal data. Many ESPs suggest that each test cell should produce at least 100 responses, which requires marketers to back into the necessary cell size from the likely response rate. A general rule is to allocate at least 5 percent of the total list for testing.
  • Maintain a control group. For comparison purposes, create and maintain a control group that does not get exposed to any of the variables being tested. Such an unadulterated group will allow test results to be contrasted to this static group over time.
  • Get help. Working with a marketing service provider, agency, or consultant that has experience in different testing methodologies will reduce your knowledge gap with testing and speed your time to realize the optimized mailing. There are several great resources and books that provide much greater detail on testing best practices, so be sure to make use of them.

While email marketing is not rocket science, marketing does necessitate some science. Be thankful that you are a marketer, and embrace the notion that a little failure is a necessary part of the practice of scientific marketing. Better to test a small portion of your list and experience some small failure than to launch a mailing to your entire list and experience failure in a massive way. Test early and test often, and make testing an element of your production process.

Until next time, keep on mailing.

David is off today. This column was originally published on June 20, 2011 on ClickZ.
