It happened again last week. A client sent an e-mail campaign using a new vendor. When the results came in, they were phenomenal: more than three times the open rate and ten times the click-through rate compared with the prior campaign. We were thrilled… until we learned the results were actually the same as they were with the old vendor. They merely appeared higher because they were calculated differently.
This is the third time this has happened in one year. I’m tired of it. Why can’t this industry agree on measurement metrics? Is this even an issue that anyone else has noticed?
Jeanniey Mullen: Do other people feel a lack of measurement standards is a real issue that affects a number of marketing efforts?
Deirdre Baird: I chair the Email Experience Council's deliverability roundtable. Our first initiative was to assess the state of industry metrics and bounce management. This week, we're releasing an extensive whitepaper and the survey results from 321 mailers and 29 ESPs [e-mail service providers], representing thousands of client companies. What we discovered is others feel the same way you do, and we've also uncovered three key reasons:
- Conflicting metrics. There’s a lack of consistency in calculating key performance metrics (delivery, open, click rates) that makes it impossible to establish industry benchmarks or compare results.
- Inconsistent bounce data and definitions. Getting standardized, accurate bounce data from ISPs is a top concern, but there’s no industry consensus on what the key terms mean, such as “hard bounce,” or how they should be applied.
- Inadequate bounce management. Everyone agrees that e-mail deliverability is very important, but many lack the reporting systems to really understand their results or act on them.
These results paint an alarming picture and should serve as an industry wakeup call to address our inability to define, calculate, view, and act on key metrics.
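To make the conflicting-metrics problem concrete, here is a minimal sketch in Python with hypothetical campaign numbers (the figures, and the `open_rate` helper, are illustrative assumptions, not data from the survey). The same campaign reports a very different "open rate" depending on whether a vendor divides by messages sent or messages delivered:

```python
def open_rate(opens: int, denominator: int) -> float:
    """Return an open rate as a percentage of the chosen denominator."""
    return 100.0 * opens / denominator

# Hypothetical campaign figures for illustration only.
sent = 100_000                 # messages handed off to ISPs
bounced = 15_000               # messages that bounced
delivered = sent - bounced     # 85,000 messages delivered
unique_opens = 17_000

print(f"Open rate (of sent):      {open_rate(unique_opens, sent):.1f}%")
print(f"Open rate (of delivered): {open_rate(unique_opens, delivered):.1f}%")
```

With these numbers the first formula reports 17.0% and the second 20.0%, even though nothing about the campaign changed; two vendors using different denominators would look incomparably different on the same results.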
Dave Lewis: The consequences of what Deirdre has outlined are important to understand. Without reliable metrics and performance data, e-mail marketers are flying blind. They don’t know their “real” results, as your client discovered. Nor can they adequately maintain their lists or proactively manage their practices, all of which increases their deliverability risk. This situation makes it extremely difficult to compare results across vendors/solutions or even to value e-mail relative to other channels. I see this latter point as a threat to the channel itself. Without standardized metrics, you’re significantly disadvantaged when it comes to the budget battles over marketing dollars.
JM: Given that these issues are broad, how do you see marketers and mailers reacting to this information? Can they leverage it to improve what they are doing?
DL: As Deirdre said, our findings should serve as a wakeup call for the industry, including mailers. As every practitioner of DM knows, the devil is in the details. E-mail marketing is no different. Our report clearly indicates the areas that mailers should examine in assessing the quality of their own performance metrics, bounce definitions, bounce management practices, or those of the service provider that supports them. In a very real sense, we’re empowering mailers with the information to ask the right questions, and that will contribute to the momentum for change.
JM: How do you think ESPs will react? Do you think it is possible for all ESPs to ever agree to some standardized processes?
DB: I believe ESPs will respond favorably to any endeavor that is in the best interest of their clients. We shouldn't forget that ESPs are forced, due to inadequate, incomplete, and sometimes inaccurate data supplied by the ISPs, to try to create meaningful metrics for their clients to use. Their efforts in this respect are almost universally more successful than what in-house mailers have accomplished, and they should be applauded. However, even if all ESPs agreed to use the same metrics, their ability to calculate those metrics with absolute accuracy would still be hampered by the data they receive from ISPs. So we need to be realistic about all metrics and, in addition to defining them, be clear about their limitations.
We aren't advocating that certain metrics be abandoned for others. Even though different ESPs calculate certain metrics differently, that doesn't mean the metrics they use have no value. Click rates calculated from unique clickers and click rates calculated from total clicks are both valuable metrics, and an ESP that uses one versus the other could certainly present a sound argument for its choice. We aren't suggesting one should be abandoned, but rather that we either use different names for these two different metrics or, at a minimum, clearly state on reports how they are calculated, so that mailers know exactly what they are measuring.
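The unique-clickers-versus-total-clicks distinction can be sketched in a few lines of Python. The click log and recipient addresses below are made-up assumptions for illustration; the point is only that the two formulas diverge whenever anyone clicks more than once:

```python
# Hypothetical per-recipient click counts for a delivered campaign.
click_log = {"a@example.com": 3, "b@example.com": 1,
             "c@example.com": 0, "d@example.com": 2}
delivered = len(click_log)

unique_clickers = sum(1 for clicks in click_log.values() if clicks > 0)
total_clicks = sum(click_log.values())

unique_click_rate = 100.0 * unique_clickers / delivered
total_click_rate = 100.0 * total_clicks / delivered

print(f"Click rate (unique clickers): {unique_click_rate:.1f}%")
print(f"Click rate (total clicks):    {total_click_rate:.1f}%")
```

Here three of four recipients clicked (75.0%), but six total clicks against four deliveries yields 150.0%. Both numbers are defensible; reporting them under the same unqualified label "click rate" is what makes cross-vendor comparison meaningless.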
JM: Realistically, how long do you think it will take for the industry to get to a point where we have standards that work?
DL: There’s no question that achieving consensus on standardized metrics, definitions, and practices will be the hard part. But the basis for that consensus is in our findings, and I’m confident we’ll get there, partly because we have no other choice. Our current metrics muddle serves no one, and e-mail marketing has matured to a point where standard metrics and measures are required, especially if we’re to hold our own with other DM channels. The catalyst for change will come from two sectors: the mailers who will demand consistent metrics and more visibility into their results from the IT departments and vendors that support them, and the industry analysts and pundits who assess the various service and solution providers and examine e-mail marketing in a multichannel context.