My last column discussed benchmarks and how to make sure the ones you are using are reliable. This time, I'm going to talk about how – and how not – to use benchmarks.
Much of the flak aimed at benchmarks should really be directed at how people use them. Benchmarks themselves aren't bad, but the ways people use them can be. There are right and wrong ways to use them.
Not: "Best Day to Send" Studies
These studies are interesting to read, and they offer great ideas for days to test against each other. But should you blindly implement their findings in your own e-mail campaigns? Never. If you test and find a day or time that gets you better results, by all means, go for it: shift your sends there. But the idea that there's one perfect send time that optimizes results for everyone? Phooey.
Not: Missing Benchmarks = Failure
Again, phooey. Plenty of e-mail campaigns generate revenue and meet their revenue goals without hitting the "industry average" marks for opens, clicks, and conversions. And revenue is what matters: if the goal is revenue generation and you meet your goal, you're a success. Benchmarks can, however, give you something to shoot for, a "stretch goal" for already profitable campaigns.
Not: Meeting Benchmarks = Success
Not always. Meeting benchmarks means you hit an industry average. No more, no less. Develop your own internal goals, based on your business interests, and judge your campaign on those. Even campaigns that hit industry benchmarks for opens, clicks and conversions can lose money. It's not always true that you'll make it up on volume.
True story: I was working with one division of a large corporation. I got a call from a person in another division asking for my help. She'd been tasked with providing projections for a series of upcoming e-mail initiatives. My colleague confessed she had no idea where to start.
I asked about current metrics. There weren't any; the system they'd been using for sends provided no tracking or reporting. They had no idea how previous sends had performed.
This is a place where benchmarks are invaluable. We discussed some recent industry benchmarks. I also shared figures we were seeing in the division I worked with. She seemed surprised by the numbers. Using these two data sets, we came to some realistic estimates for her new campaign.
This quick discussion probably saved her from projecting open rates above 50 percent, with click-throughs in the double digits -- something that's highly unlikely, and something which would have caused her pain when asked for an explanation after the fact. And if her campaigns do perform as well as she originally thought they might (without knowing how e-mail campaigns perform in general), she'll be a hero based on our conservative estimates.
One benefit of tracking and reporting is you can see how recipients interact with your e-mail. This allows you to pinpoint areas where you may be able to improve the experience, and keep more readers engaged. Benchmarks can help you determine where you stand the best chance of improving your e-mail's performance.
|Metric|Actual A|Actual B|Actual C|Benchmark*|

* Business Products and Services, DoubleClick Q3 2005 E-mail Trend Report
In the table above, you see three different examples of "actual" e-mail metrics. By comparing performance to the benchmark, you can identify areas where improvement is most likely to occur.
Note I said that the benchmarks highlight areas where improvement is most likely to occur, not where it actually will occur. These areas are the low-hanging fruit: they should be easier to improve than areas already performing in line with the benchmarks. Rather than spend resources (in the case of "Actual A") trying to improve your open rate, which is already above average at 26.2 percent, you're better off working on improving deliverability, which is below average.
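The low-hanging-fruit comparison can be sketched as a simple gap analysis. All of the figures below are hypothetical placeholders except the 26.2 percent open rate mentioned above; none come from the DoubleClick report itself.

```python
# Hypothetical actuals vs. benchmarks; only the 26.2% open rate
# appears in the article -- the other numbers are illustrative.
actual = {"delivered": 0.890, "open": 0.262, "click": 0.078}
benchmark = {"delivered": 0.940, "open": 0.250, "click": 0.080}

# Relative shortfall vs. benchmark; positive means below benchmark.
gaps = {m: (benchmark[m] - actual[m]) / benchmark[m] for m in actual}

# The largest positive gap is the low-hanging fruit to work on first.
for metric, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    status = "below benchmark" if gap > 0 else "at or above benchmark"
    print(f"{metric}: {gap:+.1%} ({status})")
```

With these illustrative numbers, deliverability shows the largest relative shortfall, so it tops the list, while the above-average open rate falls to the bottom.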
Yes: Identify Macro-environmental Trends
There are outside forces that influence e-mail metrics. If you're seeing a decline in open rates, it may be your list or something you're doing, or it may be an increase in inbox traffic that's affecting all senders. Benchmarks can help you figure that out. If your open rates are declining an average of 2 percent and the benchmarks show something similar, it may not be anything you're doing. There may not be anything you can do about it. Benchmarks can help you learn what to worry about, and what to figure out how to live with.
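The "is it me or is it everyone?" question can be sketched as a comparison of your own trend against the benchmark trend. All of the rates below are hypothetical; substitute your own reports and a published benchmark series.

```python
# Hypothetical open rates over three reporting periods.
my_opens = [0.300, 0.285, 0.270]         # my program, last three quarters
benchmark_opens = [0.290, 0.280, 0.268]  # industry benchmark, same quarters

my_decline = my_opens[0] - my_opens[-1]
bench_decline = benchmark_opens[0] - benchmark_opens[-1]

# If the two declines are close, the drop is probably a macro trend.
if abs(my_decline - bench_decline) <= 0.01:
    print("Decline tracks the industry -- likely a macro trend")
else:
    print("Decline diverges from the benchmark -- look at your own program")
```

The 0.01 threshold here is arbitrary; the point is simply that a decline mirrored in the benchmarks is a weaker signal about your own program than one that diverges from them.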
Yes: Sanity Check
People often run projections on an e-mail program looking for a certain result; a revenue-per-dollar-spent figure, or an ROI goal. That's smart. But it's important to make sure the assumptions that get you there are realistic. That's where a sanity check against benchmarks can come in handy.
|Metric|Goal|More Realistic Estimate|
|Revenue per Dollar Spent|$3.00|$1.62|
|Total Send Quantity|50,000|50,000|
|Cost of Send|$1,675|$1,675|

|Metrics|To Meet Goal (Raw Data / Metric)|More Realistic Estimate (Raw Data / Metric)|

* Business Products and Services, DoubleClick Q3 2005 E-mail Trend Report
Which figures do you have more confidence in: the ones you'd need to meet the goal, which exceed industry averages at every step of the way, or the ones based on benchmarks? It's not that the goal metrics are wrong; it's just that you'd have to make the case for how you're going to outperform industry averages. The figures based on benchmarks are much more realistic.
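The arithmetic behind the sanity check is simple. The $1,675 cost, the 50,000-piece send, and the $3.00 and $1.62 revenue-per-dollar figures come from the table above; everything else is just multiplication.

```python
# Back-of-envelope sanity check using the article's figures.
send_qty = 50_000
cost = 1_675.00

goal_revenue = 3.00 * cost       # revenue needed to hit $3.00 per $1 spent
realistic_revenue = 1.62 * cost  # revenue at the benchmark-based $1.62

print(f"Revenue needed for goal:      ${goal_revenue:,.2f}")
print(f"Benchmark-based expectation:  ${realistic_revenue:,.2f}")
print(f"Required revenue per e-mail sent: "
      f"${goal_revenue / send_qty:.3f} vs ${realistic_revenue / send_qty:.3f}")
```

Working backward from the roughly $5,000 the goal requires, through benchmark delivery, open, click, and conversion rates, quickly shows how aggressive each step of the funnel would have to be.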
I've worked with entrepreneurs whose rosy expectations of e-mail performance are quickly brought down to earth by benchmarks. Similarly, if you have a higher-up pushing for unrealistic revenue goals, provide them with a "best case scenario" based on benchmarks and your own past performance. People who don't work with e-mail often have no idea what to expect. Benchmarks are an easy way to give them the information they need to bring their expectations in line with reality.
Benchmarks: An Endangered Species
The benchmarks I've used in this article are from Q3 2005, over a year old. Why? My most trusted source for benchmarks, DoubleClick, has stopped publishing them. This happened right around the time their e-mail division was acquired by Epsilon Interactive, so I'm hoping it's just a temporary situation. But I'm starting to wonder.
Other organizations publish benchmarks, but many are small ESPs with a relatively small roster of small clients, so their figures aren't really relevant to my client base. The "benchmarks are worthless" movement hasn't helped, either. Finally, it takes time and money to analyze and publish benchmarks, and fewer and fewer companies are willing to devote the resources to it.
So use the benchmarks you can find, and use them well: don't invest them with too much power, but do let them help you. Don't listen to those who say they're "worthless." They're not, so long as you understand what they can -- and can't -- do, and use them correctly.
Jeanne Jennings is a 20 year veteran of the online/email marketing industry, having started her career with CompuServe in the late 1980s. As Vice President of Global Strategic Services for Alchemy Worx, Jennings helps organizations become more effective and more profitable online. Previously Jennings ran her own email marketing consultancy with a focus on strategy; clients included AARP, Hasbro, Scholastic, Verizon and Weight Watchers International. Want to learn more? Check out her blog.
March 19, 2014