
Testing: Lost Revenue That Is Priceless

Testing may seem pointless at times, but it's a long-term investment in your e-mail marketing program.

“The problem with testing is that I lose revenue. Whichever group received the message that lost the test generates less revenue than the group that received the message which won the test. So every time I test, I am not maximizing ROI.”

So said an attendee at a half-day workshop I recently led on e-mail marketing. She isn’t alone. Many marketers are under a lot of pressure to maximize sales in this tough economy. And when you test, you’ll have a winner — and a loser, with lagging performance.

She’s obviously experienced something I’ve seen with clients. Upper management looks at the results of a test and, rather than being excited about the lift generated by the winning segment, gets frustrated by the additional money, traffic, or other measure of success not garnered from the losing group.

There’s a feeling that everyone “knew” which version (usually the test) was better. So why bother to test? What a waste of time — sending the more effective (test) version to the entire list would have maximized revenue.

But here’s the thing: you never really know what’s going to work before you test it with a live audience. Hindsight is 20/20. While 90 percent or more of the tests I do for clients result in a lift in ROI, there are some, here and there, that don’t. Nobody’s perfect.

Some of those failed tests were things I thought, based on past experience, would improve results, but didn’t. A few were things I felt certain would increase performance, because I’d had success using them with other clients. But for whatever reason, they just didn’t work.

Some of the tests that worked amazingly well were things I expected to boost response just a bit. Sometimes the simplest things produce the greatest results: there are times when I’ve projected a 3 percent lift and seen a 100 percent increase in response.

But the better the winning version does, the worse the losing version looks and the more money you’ve “left on the table.” It’s a double-edged sword.

Full disclosure: testing is one of my favorite things. I’m a numbers person; I like to have quantifiable proof that something worked. These figures play well in presentations, like the workshop I led in Las Vegas, as well as in proposals to attract new clients.

But it’s more than that — a well-implemented A/B split or multivariate test provides solid support that the change being made is the right one. It also weeds out external factors that could inaccurately influence your results from one send to another. Deliverability issues are the big one here (if one send has higher deliverability than the other it can impact results), but even things like the day of the week and time of day can skew results.
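The article doesn't prescribe a mechanism for the split itself, but the reason an A/B test controls for deliverability, day-of-week, and time-of-day effects is that both versions go out at the same time to randomly assigned halves of the same list. A minimal sketch of that random assignment (the function name and list format are illustrative, not from the article):

```python
import random

def ab_split(recipients, test_fraction=0.5, seed=42):
    """Randomly assign recipients to control (A) and test (B) groups.

    Because both groups are drawn at random from the same list and
    mailed at the same time, external factors like send time and
    deliverability affect each group roughly equally.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = recipients[:]   # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[cut:], shuffled[:cut]  # (control, test)

# Hypothetical example: split a 10-address list 50/50
control, test = ab_split([f"user{i}@example.com" for i in range(10)])
```

The key design point is randomization: segmenting by anything systematic (signup date, alphabetical order) would reintroduce exactly the external biases the test is meant to weed out.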

As the holidays approach, timing becomes even more critical. People’s e-mail habits change around the holidays: they may be in the office less (B2B) and checking e-mail either more or less frequently (B2C), so e-mail results during this period may be better or worse than during more “normal” times.

If you send a “new” version of an e-mail during this time, your results will be influenced by the holiday — and the lift or drop you see compared to previous sends may not hold up in the future. This makes A/B split testing even more important this time of year.

The key with testing is to look at it as a long-term investment in your e-mail marketing program. It’s not just about the additional monies that were generated from the winning version with the single test send — it’s about the future. It’s about the ongoing lift in revenue that will be recognized as the winning version becomes the control and is rolled out to the entire list.

It’s also about gathering good data on what has the most impact on your list. Gathering statistically significant metrics from A/B split tests can give you a feel for which types of testing yield the best results. Then you can use this information to build a testing plan for 2010 and beyond.
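The article doesn't spell out how to judge statistical significance, but a common way to check whether a test version's conversion rate really beat the control is a two-proportion z-test. A minimal sketch using only the standard library (the function name and the conversion counts below are hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did version B convert at a
    significantly different rate than version A?

    Returns (z, two_sided_p). A p-value below 0.05 is the
    conventional threshold for statistical significance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical example: 200 conversions from 10,000 control sends
# vs. 260 from 10,000 test sends
z, p = two_proportion_z(200, 10_000, 260, 10_000)
```

With these made-up numbers the p-value comes in under 0.05, so the lift would count as significant; a smaller gap on the same list sizes typically would not, which is exactly why statistically significant metrics, not raw differences, should feed the testing plan.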

Until next time,

Jeanne
