Industry Delivery Benchmarks Are Meaningless
Drill down to collect and analyze these seven internal metrics to assess e-mail campaign performance.
Whenever I talk with a client about metrics, the conversation often turns to benchmarks: “We’re doing great because our average open rate is 5 percent higher than the industry benchmark.” Or, “What’s the average click rate?”
Given the metrics that e-mail generates so effortlessly, I understand why people want to compare their programs against some industry standard to assess how they’re doing.
Problem is, general industry benchmarks don’t accurately reflect your program’s performance. If your e-mail program doesn’t match the benchmarked population for list acquisition and quality, expectation setting at opt-in, content type, frequency, and delivery methods, the numbers are useless for anything more than a passing comparison.
We’ve seen, for example, that permission-based house lists deliver better results than third-party rentals with murky permission history. You might think, then, you’re doing pretty well if your newsletter’s 30 percent open rate beats some industry-reported number of 20 percent for business-to-business (B2B) promotional mailings.
But if that benchmark mixes opt-ins with opt-outs, one-offs with newsletters, biweeklies with semiweekly mailings, you probably aren’t doing as well as you thought.
Drill Down for Meaningful Statistics
How does that 30 percent compare to your performance over the last 10 to 20 campaigns? You might find your open rates fall by 2 to 3 percentage points each campaign.
That’s a serious problem that measuring against industry benchmarks won’t help you solve. Once you know your own metric trends, however, you can work backward to find what’s causing the problem and how to improve.
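Tracking your own trend is straightforward. Here's a minimal sketch of the idea, in Python, with made-up campaign numbers chosen to illustrate the 2-to-3-point-per-campaign slide described above:

```python
# Hypothetical sketch: flag a downward open-rate trend across recent campaigns.
# The campaign figures and the 2-point threshold are illustrative, not real data.

def open_rate(opens, delivered):
    """Open rate as a percentage of delivered messages."""
    return 100.0 * opens / delivered

# (opens, delivered) for the last five campaigns, oldest first
campaigns = [(3000, 10000), (2800, 10000), (2550, 10000), (2300, 10000), (2100, 10000)]

rates = [open_rate(o, d) for o, d in campaigns]
drops = [earlier - later for earlier, later in zip(rates, rates[1:])]
avg_drop = sum(drops) / len(drops)

if avg_drop >= 2.0:  # falling roughly 2+ points per campaign
    print(f"Open rate is trending down about {avg_drop:.1f} points per campaign")
```

An industry benchmark would never surface this pattern; only your own campaign history can.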
Even benchmark reports from delivery service providers (DSPs), such as EmailAdvisor, Pivotal Veracity, Return Path, and Habeas, and e-mail service providers (ESPs) aren’t absolutes. Applications can vary in the way they collect, measure, and report results.
For example, DSPs rely on seed addresses as their main tool for monitoring delivery. That means they take a sample and extrapolate the results to your whole list, the way you would with a survey. The method is useful and can identify many problems, but the numbers it produces shouldn’t be treated as absolutes.
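To see why seed results are an estimate rather than an absolute, consider a toy example. The seed outcomes below are entirely made up, but they show how a small sample produces a placement percentage that is then projected onto the full list:

```python
# Illustrative sketch of seed-based delivery monitoring.
# All seed addresses and outcomes here are hypothetical.

seed_results = {
    "seed1@example.com": "inbox",
    "seed2@example.com": "bulk",
    "seed3@example.com": "inbox",
    "seed4@example.com": "missing",  # no bounce, yet not in inbox or bulk
    "seed5@example.com": "inbox",
}

total = len(seed_results)
inbox = sum(1 for v in seed_results.values() if v == "inbox")
bulk = sum(1 for v in seed_results.values() if v == "bulk")

# These are sample-based estimates, not absolute delivery numbers.
print(f"Inbox placement (sample): {100 * inbox / total:.0f}%")
print(f"Bulk folder (sample): {100 * bulk / total:.0f}%")
```

With only a handful of seeds per domain, a single misrouted message swings the estimate by many percentage points, which is exactly why these figures should guide, not define, your judgment.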
Does the application or ESP define “delivered e-mail” as e-mail delivered only to the inbox? Or does it include e-mail delivered to the bulk folder as well? You can see where differing definitions can change the number you use to compare among services or how you judge your own program.
To muddy the waters further, ESPs themselves can vary slightly in how they report delivery. Most count delivered as sent messages minus bounced messages. While this is a reasonable definition, in rare cases a message reaches neither the recipient’s inbox nor the junk folder and no bounce message is returned. That creates a falsely high delivered rate.
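The common calculation, and the way silent drops inflate it, can be sketched in a few lines. The volumes below, including the silently dropped figure, are hypothetical:

```python
# A minimal sketch of the common "delivered = sent - bounced" calculation,
# and how messages that vanish without a bounce overstate it.
# All numbers here are made up for illustration.

def delivered_rate(sent, bounced):
    """Delivered rate as most ESPs report it: sent minus bounces."""
    return 100.0 * (sent - bounced) / sent

sent, bounced = 100_000, 4_000
reported = delivered_rate(sent, bounced)  # what the ESP would show

# Suppose 2,000 messages were dropped with no bounce ever returned (assumed).
silently_dropped = 2_000
actual = 100.0 * (sent - bounced - silently_dropped) / sent

print(f"Reported delivered rate: {reported:.1f}%")
print(f"Actual delivered rate:   {actual:.1f}%")
```

The gap between the two figures is invisible in standard ESP reporting, which is why differing definitions matter when you compare services.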
The Numbers to Look At
I like to look at trend numbers from the following basic campaign metrics in unison to alert me to problems (I also try to view the same data across the top 10 domains to help highlight changes quickly so I can react to them):
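Breaking the same metrics out by recipient domain is simple to do yourself. Here's a hypothetical sketch, using opens as the example metric; the addresses and open flags are illustrative:

```python
# Illustrative sketch: break a campaign metric (opens) out by recipient
# domain so a sudden change at one large ISP stands out. Data is made up.
from collections import Counter

# (recipient address, opened?) pairs from a hypothetical campaign
recipients = [
    ("a@gmail.com", True), ("b@gmail.com", False),
    ("c@yahoo.com", True), ("d@yahoo.com", True),
    ("e@aol.com", False),
]

sent_by_domain = Counter(addr.split("@")[1] for addr, _ in recipients)
opens_by_domain = Counter(addr.split("@")[1] for addr, opened in recipients if opened)

for domain, sent in sent_by_domain.most_common(10):  # top 10 domains by volume
    print(domain, f"{100 * opens_by_domain[domain] / sent:.0f}% opened")
```

A drop at a single major domain, with everything else flat, usually points to a filtering or reputation issue at that ISP rather than a problem with your content or list.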
Put Benchmarks in Their Place
Benchmarks by themselves aren’t completely useless. If your results vary dramatically, they can point you toward potential areas for immediate attention within your program. However, far more useful are the trended benchmarks you compile on your own, using your own data to measure your own success or identify improvements.
Until next time, keep on deliverin’!
Want more e-mail marketing information? ClickZ E-Mail Reference is an archive of all our e-mail columns, organized by topic.