Benchmarks: Worthwhile or a Waste of Time?

Four questions to consider before using that e-mail benchmark.

If you’re a regular reader of this column, you know I’m a fan of benchmarks and industry-average metrics. I like to quantify things, to use numbers to set goals and gauge e-mail campaigns’ success or failure, strengths or weaknesses. In this column, I’ll go over some things to consider when looking at a benchmark. In my next column, I’ll cover the right and wrong ways to use industry-average metrics.

Before you rely on any benchmark, understand how it was developed and what its biases and limitations may be. Be skeptical; that old line about lies, damned lies, and statistics is completely relevant here.

With e-mail benchmarks, I start by considering the following:

  • Who’s the source of the metric?
  • What population was studied?
  • Over what period of time?
  • What type of e-mail was sent?

The Source

Most e-mail benchmarks available to marketers come from e-mail service providers (ESPs). They’re in a good position to provide them, too. They have access to all their clients’ e-mail metrics, which are the raw ingredients for calculating an industry average. There’s consistency across the data, since it comes from a computer system that calculates opens, clicks, and so forth the same way across the board.

But ESPs have an inherent bias. It’s in their best interest to show that e-mail marketing works. Would they publish benchmarks showing open or click-through rates near zero? Probably not; the figures they do release tend to reflect campaigns that generated at least some opens and clicks.

Industry publications are another frequent source of benchmarks. In this case, they poll their readers and ask what opens, clicks, conversions, and the like they see in their e-mail marketing efforts. The raw data is self-reported, which adds some caveats. First, it’s not a complete view of the population (whereas figures from ESPs should be); only those who decide to respond are counted. This self-selection can skew the data, since the sample is neither complete nor truly random (as statistical validity requires).

There’s also no way to confirm all metrics are calculated in a consistent manner. One respondent may look at total clicks while another looks at unique figures, so you’re mixing apples and oranges. Finally, you’re relying on a person (rather than a computer) to provide an accurate figure. People are forgetful, they sometimes exaggerate, and, if the incentive for providing the data is enticing enough, they may even just make up figures.
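To see how much this matters, consider a purely hypothetical send to 10,000 recipients in which 800 people click and some of them click more than once, for 1,000 total clicks. Measured on unique clicks, the click-through rate is 8 percent; measured on total clicks, it’s 10 percent. Both are legitimate figures, but comparing your unique rate against someone else’s total rate (or blending the two into one benchmark) tells you very little.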

The Population

Who was studied? Most ESPs, publications, and others who develop benchmarks cater to specific target audiences. The more you have in common with that group, the more likely the metrics are relevant to your business.

If you’re a non-profit using e-mail for fundraising, the retail average for revenue generated per e-mail sent is a benchmark to consider but not to live and die by. Likewise, if an ESP’s clients are mom-and-pop firms with small e-mail lists, their metrics aren’t a good benchmark to use when you’re doing projections for your Fortune 500 company’s send to 2 million e-mail addresses (smaller lists tend to show higher metrics).
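As a quick illustration of that metric, revenue per e-mail sent is simply the total revenue attributed to a campaign divided by the quantity of e-mail sent. A hypothetical mailing of 250,000 messages that generates $25,000 in sales works out to $0.10 per e-mail sent. Before you compare that to a published benchmark, make sure you know whether the source divides by messages sent or messages delivered; the two can differ noticeably.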

In addition to whose results the figures are based on, you want to look at whom they sent to. Third-party e-mail lists tend to have much lower response rates than house lists. Opt-in lists usually show higher metrics than opt-out lists. If you’re sending to businesses, you probably don’t want to rely on metrics based on sends to consumers. The closer the senders and their audiences are to you and your audience, the more likely the benchmark is relevant to your project.

Time Period

In statistics, the more data you collect, the more reliable your results are likely to be. Basing metrics on multiple sends is a better gauge than relying on a single send. Factoring in data from a longer time period is better than looking at just one day of sends. Be sure you know the data collection timeframe.

Also look at macro-environmental factors during the period. Charitable contributions rose in the aftermath of 2004’s Asian tsunami and 2005’s Hurricane Katrina. Should you use revenue generated per e-mail sent from that period to project charitable contributions for Q4 2006? Probably not.

Type of E-Mail

The type of e-mail sent can make a big difference as well, especially when we’re talking about click-throughs and revenue-based metrics. Asking people to click through to read more of an editorial article is a very different call to action than asking them to click to buy now. If you’re doing the latter, be sure the benchmark you’re working from does the same.

I’ve also conducted e-mail campaigns in which the call to action was offline, such as using e-mail to drive recipients to brick-and-mortar stores for events. A click isn’t required for conversion (although we did provide links for directions and other information). Using the metrics from one of those efforts to build projections for a direct-sale e-mail wouldn’t make sense.

Are Benchmarks Worthwhile?

Those who dismiss benchmarks as a waste of time are overreacting. We use benchmarks in other areas of our lives all the time. Have you bought or sold a home lately? The price was likely based on sales of similar homes in the area: a benchmark. Ever negotiated salary with an employer or an employee? There’s a good chance you or the person across the table used a survey of what professionals in similar positions in your industry get paid to arrive at a final figure: again, an industry-average metric.

The key to using benchmarks is knowing how to use them effectively. In my next column, we’ll go over the right and wrong ways to use industry-average metrics and provide some guidelines for getting the most value from them. In the meantime, I’d love to get your thoughts on benchmarks: how do you use them, what benefit do you get from them, and what are your favorite sources?

Until then,

Jeanne

Meet Jeanne at E-Mail Marketing, the first in the new ClickZ Specifics conference series, October 24-25 in New York City.

Want more e-mail marketing information? ClickZ E-Mail Reference is an archive of all our e-mail columns, organized by topic.
