For many organizations, email is a key revenue channel. A failure in execution can have enormous consequences, so getting it right is critical.
Sadly, for many, quality control amounts to little more than ticking boxes on a pre-deployment QA checklist, one that has often been developed ad hoc as mistakes were uncovered during deployment.
I’ve been focused on quality control for many years, and our research shows that more than 10 percent of major brand marketing emails have faulty links, and even more have image problems. The problems don’t stop there, either: spelling mistakes, rendering issues, authentication failures and a myriad of other defects abound. The results include brand damage, angry customers, frustrated management and lost revenue.
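Faulty links are a good example of a check that can be partly automated before a campaign ever goes out. The sketch below is a minimal, illustrative pre-flight check, not a production tool: it extracts the links from an HTML email body and flags ones that look broken before any HTTP request is even attempted (unresolved merge tags, placeholder text, or a missing scheme). The placeholder patterns are assumptions about what "faulty" typically looks like; a real check would also fetch each URL and verify the response.

```python
import re
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value or "")

# Patterns that commonly indicate an unfinished link: unresolved merge
# tags ({{...}} or %%...%%), placeholder text, or an empty "#" anchor.
# These are illustrative guesses, not an exhaustive list.
SUSPECT = re.compile(r"(\{\{.*\}\}|%%.*%%|INSERT|TBD|^#?$)", re.IGNORECASE)

def suspect_links(html):
    """Return hrefs that look faulty before any HTTP check is attempted."""
    parser = LinkExtractor()
    parser.feed(html)
    return [h for h in parser.links
            if SUSPECT.search(h)
            or not h.startswith(("http://", "https://", "mailto:"))]
```

A check like this costs almost nothing to run on every deployment, which matters for the cost/coverage/consistency trade-off discussed below.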
Prior to Innovyx, I worked in two areas that I think are relevant to email from a quality perspective.
The first is publishing. Magazines and even simple direct mail pieces go through multiple rounds of review and approval. The timeframes and costs involved make it imperative that every piece that goes out is right, from copy and layout to registration and color balance. In email production, however, there is no such imperative. Timescales are so short and the perceived cost is so low that quality control takes a backseat to “getting the job done.”
The second is software engineering. When a software program doesn’t work properly, the consequences can be significant. Software is also complex, and an effective testing methodology and code coverage analysis are essential to a successful outcome. Though email is complex and often contains programming, it isn’t software, and it’s not usually managed by people with a formal background in testing or quality management.
What I think we can learn from these two disciplines is that success depends on both focus and having an effective strategy. If we are to create an effective testing and quality control strategy, we must start by understanding the moving parts of quality control.
Quality control is a compromise — a balance of risk and reward.
I’ve often heard clients claim that they have zero tolerance for errors. However, testing every campaign so thoroughly that there is no possibility of a mistake slipping through is so expensive that it’s simply not worthwhile. Inherently, it’s necessary to understand how much risk is right for a given situation.
What drives this is the quality control triangle made up of three competing factors: cost, coverage and consistency.
Cost — how much you spend performing quality control. This may be temporal as well as financial.
Coverage — how many things you’re checking. What to check is a study in and of itself.
Consistency — how consistently and reliably things are being checked. There’s often a disconnect between theory and reality on this.
Changing one element will have a knock-on effect on the others. For example, increasing coverage will either also increase cost or reduce consistency. Reduction in cost (excluding efficiency gains) will only be achieved by compromising on coverage or consistency.
All this is quite abstract, but it has very real and practical implications for marketers trying to improve the quality of their work and reduce the time they spend sending apologies and corrections.
The first is that it’s not practical to check everything every time and the “kitchen sink” approach to quality control procedures doesn’t lead to the fewest errors. To put it another way, the best pre-deployment checklist may not be the longest and the best response to an error may not be to add another check to the checklist. In fact, there are times when the error rate may be reduced by shrinking the checklist!
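The claim that a shorter checklist can produce fewer errors can be made concrete with a toy model. All the numbers here are invented purely for illustration: suppose a defect is caught only if its check is both on the list (coverage) and actually performed (consistency).

```python
def residual_error_rate(base_errors, coverage, consistency):
    """Toy model of errors that survive quality control.

    base_errors - latent errors per campaign before QC (assumed)
    coverage    - fraction of error types the checklist looks for (0-1)
    consistency - probability a listed check is actually performed (0-1)

    A defect is caught only if its check is both listed and performed,
    so the effective catch rate is coverage * consistency.
    """
    return base_errors * (1 - coverage * consistency)

# With 5 latent errors per campaign, a long checklist applied sloppily
# can let more errors through than a shorter one applied reliably:
long_sloppy = residual_error_rate(5, coverage=0.9, consistency=0.5)
short_reliable = residual_error_rate(5, coverage=0.6, consistency=0.95)
# long_sloppy is 2.75 errors; short_reliable is 2.15 errors.
```

The model is deliberately crude, but it captures the point: past a certain length, each added check dilutes the consistency with which the whole list is followed, and the product can get worse.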
A second implication is that reducing quality control cost (or time) without identifying specific efficiency improvements will inevitably reduce either coverage or consistency, with a commensurate increase in error rate.
So far, we’ve only scraped the surface of what it takes to have an effective and consistent quality control process. There are two key components to a successful quality control strategy that I will look at in future articles.
The first is, what should you be testing? In the balancing act of risk and reward, where do you get the most bang for your buck and how do you know? The second is, what tools should you be using? Efficiency gains are the only way to change the cost/coverage/consistency dynamic and tools are key to realizing efficiency gains.
Right now though, considering cost, coverage and consistency, how does your quality control process stack up? Is it best in class or is there room for improvement?
Until next time,
Derek