My last column discussed benchmarks and how to make sure the ones you are using are reliable. This time, I’m going to talk about how – and how not – to use benchmarks.
Much of the flak aimed at benchmarks should really be directed at the way people use them. Benchmarks themselves aren’t bad, but the ways people apply them can be. There are right and wrong ways to use them.
The wrong ways:

- Shifting all your sends to Mondays (or any other day of the week) because the latest study shows higher opens or click-throughs on that day.
- Considering an e-mail campaign a failure because it didn’t meet the industry average for opens, clicks, conversion rate, revenue per e-mail sent, or any other metric.
- Pronouncing an e-mail campaign a success because it exceeded industry averages for performance, even if it didn’t turn a profit or meet other company-specific goals.

The right ways:

- Using benchmarks as a stand-in for pre-send projections until you have your own metrics.
- Viewing benchmarks as tools to identify elements of your e-mail program where small changes may reap large returns.
- Viewing benchmarks as a relative measure of macro-environmental trends.
- Using benchmarks as a “sanity check” for goals and expectations that may be a bit out of range.
Not: “Best Day to Send” Studies
These studies are interesting to read and provide great ideas for days to test against each other, but are they something you should blindly implement in your own e-mail campaigns? Never. If you test and find a day or time that gets you better results, by all means, go for it. Shift your sends there. But the idea that there’s one perfect time for everyone to send to optimize results? Phooey.
Not: Missing benchmarks = Failure
Again, phooey. Plenty of e-mail campaigns generate revenue and meet revenue goals without hitting the “industry average” marks for opens, clicks and conversions. And that’s what matters. If the goal is revenue generation and you meet your goal, you’re a success. Benchmarks can, however, give you something to shoot for, a “stretch goal” for already profitable campaigns.
Not: Meeting benchmarks = Success
Not always. Meeting benchmarks means you hit an industry average. No more, no less. Develop your own internal goals, based on your business interests, and judge your campaign on those. Even campaigns that hit industry benchmarks for opens, clicks and conversions can lose money. It’s not always true that you’ll make it up on volume.
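The arithmetic behind that caveat is easy to sketch. Every figure below is hypothetical; the point is simply that above-average response rates don’t guarantee a profit:

```python
# Hypothetical campaign that beats typical benchmark rates for
# clicks and conversions yet still loses money.
sends = 50_000
click_rate = 0.08         # clicks as a share of sends (assumed above average)
conversion_rate = 0.02    # orders as a share of clicks (assumed above average)
avg_order_value = 12.00   # a low-ticket product
cost_of_send = 1_675.00

orders = sends * click_rate * conversion_rate
revenue = orders * avg_order_value
profit = revenue - cost_of_send

print(f"revenue: ${revenue:,.2f}, profit: ${profit:,.2f}")
```

With a $12 average order, 80 orders bring in $960 against a $1,675 send cost; the campaign “wins” on the benchmarks and still loses roughly $715.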
True story: while I was working with one division of a large corporation, I got a call from a colleague in another division asking for my help. She’d been tasked with providing projections for a series of upcoming e-mail initiatives and confessed she had no idea where to start.
I asked about current metrics. There weren’t any; the system they’d been using for sends provided no tracking or reporting, so they had no idea how previous sends had performed.
This is a place where benchmarks are invaluable. We discussed some recent industry benchmarks. I also shared figures we were seeing in the division I worked with. She seemed surprised by the numbers. Using these two data sets, we came to some realistic estimates for her new campaign.
This quick discussion probably saved her from projecting open rates above 50 percent, with click-throughs in the double digits — figures that are highly unlikely and would have caused her pain when she was asked to explain the shortfall after the fact. And if her campaigns do perform as well as she originally thought they might (without knowing how e-mail campaigns perform in general), she’ll be a hero based on our conservative estimates.
One benefit of tracking and reporting is you can see how recipients interact with your e-mail. This allows you to pinpoint areas where you may be able to improve the experience, and keep more readers engaged. Benchmarks can help you determine where you stand the best chance of improving your e-mail’s performance.
| Metric | Actual A | Actual B | Actual C | Benchmark* |
| --- | --- | --- | --- | --- |

\* Business Products and Services, DoubleClick Q3 2005 E-mail Trend Report
In the table above, you see three different examples of “actual” e-mail metrics. By comparing performance to the benchmark, you can identify areas where improvement is most likely to occur.
- In “Actual A” I’d recommend trying to improve deliverability, as that’s an area of weakness.
- In “Actual B” the open rate is what appears to be pulling response down. Here I’d focus on the ‘from’ field and subject lines; the body of the e-mail is more than pulling its weight, as indicated by the very high click-to-open ratio.
- In “Actual C” it’s the body of the e-mail I’d focus on, looking to lift click-through and click-to-open rates.
Note I said that the benchmarks highlight areas where improvement is most likely to occur, not where it actually will occur. These areas are the low-hanging fruit; they should be easier to lift than areas already performing in line with the benchmarks. Rather than spend resources (in the case of “Actual A”) trying to improve your open rate (which is already above average at 26.2 percent), you’re better off working on improving deliverability (which is below average).
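The diagnosis above can be sketched in a few lines. The benchmark values and the deliverability figure are placeholders of my own invention, not the actual DoubleClick numbers; only the 26.2 percent open rate comes from the discussion:

```python
# Sketch: compare each metric to its benchmark and flag the one
# with the largest shortfall. All benchmark values are placeholders.
benchmark = {"delivery_rate": 0.88, "open_rate": 0.240, "click_to_open": 0.30}
actual_a  = {"delivery_rate": 0.79, "open_rate": 0.262, "click_to_open": 0.31}

# Gap below (negative) or above (positive) the benchmark, per metric.
gaps = {metric: actual_a[metric] - benchmark[metric] for metric in benchmark}
weakest = min(gaps, key=gaps.get)
print(weakest)  # the metric furthest below its benchmark
```

With these placeholder figures the open rate sits above average and deliverability falls well below it, so deliverability surfaces as the low-hanging fruit. A fancier version might compare relative (percentage) gaps rather than absolute ones, but the idea is the same.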
Yes: Identify Macro-environmental Trends
There are outside forces that influence e-mail metrics. If you’re seeing a decline in open rates, it may be your list or something you’re doing, or it may be an increase in inbox traffic that’s affecting all senders. Benchmarks can help you figure that out. If your open rates are declining an average of 2 percent and the benchmarks show something similar, it may not be anything you’re doing. There may not be anything you can do about it. Benchmarks can help you learn what to worry about, and what to figure out how to live with.
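That comparison is just two trend lines side by side. The rates below are illustrative, and the 3-point tolerance is an arbitrary threshold I chose for the sketch:

```python
# Sketch: is our open-rate decline specific to us, or industry-wide?
# All rates and the tolerance are illustrative assumptions.
our_decline = (0.26 - 0.24) / 0.26        # our relative drop, ~7.7%
industry_decline = (0.25 - 0.235) / 0.25  # benchmark relative drop, ~6%

if abs(our_decline - industry_decline) < 0.03:
    print("decline tracks the industry; likely a macro trend")
else:
    print("we're diverging from the benchmark; look internally")
```

When the two declines move together, the cause is probably environmental; when yours is much steeper, start auditing your own list and creative.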
Yes: Sanity Check
People often run projections on an e-mail program looking for a certain result: a revenue-per-dollar-spent figure, or an ROI goal. That’s smart. But it’s important to make sure the assumptions that get you there are realistic. That’s where a sanity check against benchmarks can come in handy.
| | Goal | More Realistic Estimate |
| --- | --- | --- |
| Revenue per Dollar Spent | $3.00 | $1.62 |
| Total Send Quantity | 50,000 | 50,000 |
| Cost of Send | $1,675 | $1,675 |

| Metrics | To Meet Goal (Raw Data / Metric) | More Realistic Estimate (Raw Data / Metric) |
| --- | --- | --- |

\* Business Products and Services, DoubleClick Q3 2005 E-mail Trend Report
Which figures do you have more confidence in, the ones you need to meet the goal, which exceed industry averages at every step of the way, or the ones based on benchmarks? It’s not that the goal metrics are wrong, it’s just that you’d have to make the case for how you’re going to outperform industry averages. The other figures based on benchmarks are much more realistic.
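Here’s how such a sanity check might be run. Only the send quantity and cost mirror the table above; the funnel rates and average order value are hypothetical stand-ins for benchmark figures:

```python
# Sanity check: work a funnel forward from send quantity to revenue
# per dollar spent. Rates and order value are hypothetical stand-ins
# for benchmark figures; sends and cost match the table in the text.
sends = 50_000
cost = 1_675.00
delivery_rate = 0.88
open_rate = 0.24
click_rate = 0.10        # clicks as a share of opens
conversion_rate = 0.03   # orders as a share of clicks
avg_order_value = 85.00

orders = sends * delivery_rate * open_rate * click_rate * conversion_rate
revenue = orders * avg_order_value
print(f"revenue per dollar spent: ${revenue / cost:.2f}")
```

With these assumptions the funnel yields roughly $1.61 per dollar spent — in the neighborhood of the “more realistic” figure, and well short of the $3.00 goal. To defend the goal, you’d have to show which of these rates you expect to beat, and why.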
I’ve worked with entrepreneurs whose rosy expectations of e-mail performance are quickly brought down to earth by benchmarks. Similarly, if you have a higher-up pushing for unrealistic revenue goals, provide them with a “best-case scenario” based on benchmarks and your own past performance. People who don’t work with e-mail often have no idea what to expect. Benchmarks are an easy way to provide the information they need to bring their expectations in line with reality.
Benchmarks: An Endangered Species
The benchmarks I’ve used in this article are from Q3 2005, over a year old. Why? My most trusted source for benchmarks, DoubleClick, has stopped publishing them. This happened right around the time their e-mail division was acquired by Epsilon Interactive, so I’m hoping it’s just a temporary situation. But I’m starting to wonder.
Other organizations publish benchmarks, but many come from small ESPs whose client bases don’t resemble mine, so their figures aren’t really relevant. The “benchmarks are worthless” movement hasn’t helped, either. Finally, it takes time and money to analyze and publish benchmarks, and fewer and fewer companies are willing to devote the resources to it.
So make good use of the benchmarks you can find: don’t invest them with too much power, but do let them help you. Don’t listen to those who say they’re “worthless.” They’re not, so long as you understand what they can — and can’t — do, and use them correctly.
Meet Jeanne at ClickZ Specifics E-Mail Marketing, October 24-25 in New York City.
Want more e-mail marketing information? ClickZ E-Mail Reference is an archive of all our e-mail columns, organized by topic.