
Industry Delivery Benchmarks Are Meaningless

Drill down to collect and analyze these seven internal metrics to assess e-mail campaign performance.

Whenever I talk with a client about metrics, the conversation often turns to benchmarks: “We’re doing great because our average open rate is 5 percent higher than the industry benchmark.” Or, “What’s the average click rate?”

Given the metrics that e-mail generates so effortlessly, I understand why people want to compare their programs against some industry standard to assess how they’re doing.

Problem is, general industry benchmarks don’t accurately reflect your program’s performance. If your e-mail program doesn’t match the benchmarked population for list acquisition and quality, expectation setting at opt-in, content type, frequency, and delivery methods, the numbers are useless for anything more than a passing comparison.

We’ve seen, for example, that permission-based house lists deliver better results than third-party rentals with murky permission history. You might think, then, you’re doing pretty well if your newsletter’s 30 percent open rate beats some industry-reported number of 20 percent for business-to-business (B2B) promotional mailings.

But if that benchmark mixes opt-ins with opt-outs, one-offs with newsletters, biweeklies with semiweekly mailings, you probably aren’t doing as well as you thought.

Drill Down for Meaningful Statistics

How does that 30 percent compare to your performance over the last 10 to 20 campaigns? You might find your open rates fall by 2 to 3 percentage points each campaign.

That’s a serious problem measuring by industry benchmark won’t help you solve. Once you know your metric trends, however, you can work backwards to find out what’s causing the problem and how to improve.
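Spotting that kind of slide is simple arithmetic once you have the history. As a minimal sketch, with made-up open rates standing in for your last several campaigns (none of these figures are from the article):

```python
# Hypothetical sketch: spot a downward open-rate trend across recent campaigns.
# The figures below are illustrative, oldest campaign first.
open_rates = [30.1, 27.8, 25.2, 22.9, 20.4]  # percent

# Change from each campaign to the next, then the average change.
deltas = [b - a for a, b in zip(open_rates, open_rates[1:])]
avg_change = sum(deltas) / len(deltas)

if avg_change <= -2:
    print(f"Open rate falling ~{abs(avg_change):.1f} points per campaign -- investigate")
```

Anything more elaborate (seasonality, per-domain breakdowns) builds on the same idea: compare each campaign to your own history, not to an industry number.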

Even benchmark reports from delivery service providers (DSPs), such as EmailAdvisor, Pivotal Veracity, Return Path, and Habeas, and from e-mail service providers (ESPs) aren’t absolutes. Applications can vary in the way they collect, measure, and report results.

For example, DSPs use seed addresses as their main tool for monitoring delivery. They take a sample and extrapolate the results to the whole list, the way you would with a survey. This method is useful and can identify many problems, but the result shouldn’t be treated as an absolute number.

Does the application or ESP define “delivered e-mail” as e-mail delivered only to the inbox? Or does it include e-mail delivered to the bulk folder as well? You can see where differing definitions can change the number you use to compare among services or how you judge your own program.

To muddy the waters further, ESPs themselves can vary slightly in how they report delivery. Most count delivered as sent messages minus bounced messages. That’s a reasonable definition, but in rare cases a message never reaches the recipient’s inbox or junk folder and no bounce is returned. Those silently dropped messages inflate the delivered rate.
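The arithmetic behind that inflation is easy to see. A minimal sketch with made-up numbers (all figures here are illustrative, not from any real campaign):

```python
# Illustrative sketch of the common "delivered = sent - bounced" definition.
# Messages dropped with no bounce notice are invisible to this formula,
# which is why the reported rate can run falsely high.
sent = 100_000
bounced = 1_500
silently_dropped = 800  # no bounce returned

reported_delivered = sent - bounced
reported_rate = reported_delivered / sent * 100   # what the ESP shows you

actual_delivered = sent - bounced - silently_dropped
actual_rate = actual_delivered / sent * 100       # what really arrived

print(f"reported: {reported_rate:.1f}%  actual: {actual_rate:.1f}%")
```

The gap between the two rates is exactly the silently dropped volume, which is why inbox-monitoring tools exist alongside ESP reporting.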

The Numbers to Look At

I like to look at trend numbers from the following basic campaign metrics in unison to alert me to problems. (I also try to view the same data across the top 10 domains, which highlights changes quickly so I can react to them.)

  • Sent. This number should match the number of subscribers you intended the campaign to go to. When it doesn’t and the loss is 10 percent or higher, it’s time to start digging. Often, this is a problem with your segmentation strategy.
  • Bounced. This should identify both hard and soft bounces. Large volumes of hard bounces usually indicate problems such as bad data hygiene, poor sign-up practices that accept invalid e-mail addresses, and ISPs blocking your domains. Large volumes of soft bounces can mean receivers are deferring your messages, either because of full inboxes or temporary problems on their end.
  • Delivered. A 100 percent delivery rate is ideal but not practical, for multiple reasons: e-mail accounts get deactivated even on the best-maintained lists, employees change jobs, and consumers abandon addresses. Still, anything less than 98 percent means you should look at your bounce data, review the error messages, and determine how to correct the problems. Look for content-filter blocking and for ISPs blocking by IP address or domain. The most actionable information is usually in the delivery log files.
  • Open. This is a flawed metric for both senders and receivers, and it has been falling for most senders. Use it only to identify large problems, such as zero opens at a major domain or sudden drops outside the normal trend. Used alongside delivery data, it can highlight problems like unreported bounces or bulk-folder delivery.
  • Clicks. Clicks generally indicate engagement but can also reveal delivery problems. If the click rate for recipients at one domain differs drastically from all other domains, check for bulk-foldering or changes in that provider’s user interface (new error messages, rendering changes, etc.). This is the best metric for determining active recipients.
  • Unsubscribe. While you want this number to be small, it’s seldom zero. As a trend metric, it can highlight when recipients are unhappy with messaging, but don’t be misled into thinking a low number means your subscribers are happy. They could be hitting the spam button or just deleting your messages without reading.
  • Feedback loop complaints. Like unsubscribes, this number is seldom zero. In fact, zero can indicate a significant problem, such as blocking at the feedback-reporting domains. Sudden spikes or increases denote unhappy recipients or confusion over your content, and can begin to tarnish your sender reputation. Aim to keep your complaint rate below 0.1 percent. Exceeding it won’t get you blocked instantly, but you should work to bring complaints down.
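The numeric thresholds above (a 10 percent sent-to-intended loss, a 98 percent delivered floor, a 0.1 percent complaint ceiling) fold neatly into a per-campaign health check. A minimal sketch, where all field names and figures are illustrative:

```python
# Hypothetical per-campaign check using the thresholds discussed above.
# Every name and number here is illustrative, not a real campaign.
campaign = {
    "intended": 50_000,   # subscribers the campaign was meant to reach
    "sent": 49_200,
    "bounced": 600,       # hard + soft
    "complaints": 30,     # feedback-loop reports
}

warnings = []

# Sent vs. intended: a 10%+ gap often points at segmentation problems.
sent_loss = 1 - campaign["sent"] / campaign["intended"]
if sent_loss >= 0.10:
    warnings.append("sent loss >= 10% -- check segmentation")

# Delivered = sent - bounced; under 98% means dig into bounce logs.
delivered = campaign["sent"] - campaign["bounced"]
if delivered / campaign["sent"] < 0.98:
    warnings.append("delivered < 98% -- review bounce logs")

# Complaint rate over 0.1% is a reputation risk.
complaint_rate = campaign["complaints"] / delivered
if complaint_rate > 0.001:
    warnings.append("complaint rate > 0.1% -- reputation risk")

print(warnings or "no flags")
```

Run per campaign and per top domain, this turns the trend-watching described above into a routine alert rather than a manual audit.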
Put Benchmarks in Their Place

Benchmarks by themselves aren’t completely useless. If your results vary dramatically, they can point you toward potential areas for immediate attention within your program. However, far more useful are the trended benchmarks you compile on your own, using your own data to measure your own success or identify improvements.

Until next time, keep on deliverin’!

