The Email Experience Council’s recent Miami conference highlighted – perhaps inadvertently – the critical role of testing data in assessing your e-mail marketing program’s performance.
As always at e-mail conferences, attendees spent plenty of time debating whether specific e-mail messages adhered to “best practices” or flouted them, either unintentionally or deliberately.
A number of sessions, including one where I was a panel participant, took some unexpected turns in the conversation during critiques of actual e-mail messages for how well they met best practices:
- Several times, senders of e-mail messages that appeared to be best-practice train wrecks defended them by saying, “Yes, we tested them, and this is what works for us.”
- In one general session, attendees voted for the version of an e-mail message they thought would deliver the most clicks and orders: the original message and three redesigns by EEC attendees. Audience members preferred two prettier redesigns, but the first message (the client control) and the least popular redesign actually generated more orders for the sender.
- A presenter in a different session was questioned because her e-mail used printable coupons instead of the best-practice recommendation of linking to a coupon on the company Web site in order to generate a trackable click. She replied that the e-mail was designed to generate store traffic. Here, it made more sense to measure success by the number of coupons redeemed.
What’s the lesson here? It isn’t “My instincts can beat up your best practices.”
Instead, these examples underscore why you need to test every aspect of your e-mail program and use the data this testing generates when making decisions and assessing performance.
Example: “Best Practice” vs. “What Works” for Managing Inactivity
The emerging best practice when dealing with dormant addresses says to remove those that have no actions associated with them: no clicks, opens, or conversions.
Although many marketers object to purging non-bouncing addresses (even those that don’t respond), it actually has a basis in deliverability concerns, because some ISPs are starting to base inbox delivery decisions on whether their customers act on your e-mails or just ignore them.
If you implement this best practice without testing it first or using the right metrics to measure the data, you do your business a great disservice.
At the same time, you need to be aware of averages and how changing the denominator can change the rates you measure. Simply dropping inactive subscribers from mailings raises all your average numbers, such as open rate, CTR (define), and even complaint rates, because all are calculated with "delivered" as the common denominator.
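To see the denominator effect in action, here's a minimal sketch with made-up numbers: the same number of opens produces a "better" open rate after a purge, even though recipient behavior hasn't changed at all.

```python
# Hypothetical illustration: one campaign measured before and after
# purging inactive addresses. Opens stay the same, but the rate rises,
# because "delivered" is the denominator. All figures are invented.

def open_rate(opens, delivered):
    """Opens divided by delivered messages, as a percentage."""
    return 100.0 * opens / delivered

opens = 2_000
delivered_before = 20_000   # full list
delivered_after = 12_000    # after purging 8,000 inactives

rate_before = open_rate(opens, delivered_before)  # 10.0%
rate_after = open_rate(opens, delivered_after)    # ~16.7%

print(f"Before purge: {rate_before:.1f}%")
print(f"After purge:  {rate_after:.1f}%")
```

The point: a rate that jumps after a purge isn't evidence that the purge improved performance, only that the math changed.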
- Long cycles affect recipient activity: Suppose your business has a long replenishment or consideration cycle – like seasonal buying, high-ticket luxury goods, appliances, or vacations. Your customers might not buy from or even actively respond to your e-mails between purchase consideration cycles, which could be annually.
Sending them the same “buy now” e-mails won’t move the needle, but removing them without gauging their continued interest means you’ve lost those potential sales.
Or, as in the store-coupon e-mail in my previous example, if your e-mails are designed to boost location traffic instead of Web traffic, clicks are the wrong way to measure recipient activity.
One of my clients has a retail business that caters to birthday parties. Customers are in the market only once or twice a year. It would be foolish to write them off just because they don’t interact with the Web site or generate a trackable action every week.
- Alternative: Change your messaging frequency between purchase anniversaries, for example. This keeps you top of mind without peppering them with irrelevant, action-oriented e-mails.
Again, this doesn’t mean you just let sleeping addresses lie on your list. The inactivity best practice also advises you to try to reactivate addresses whose dormancy extends beyond your normal business cycles before removing them to a separate database.
This retains infrequent buyers and helps your deliverability and performance rates.
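One way to act on this advice is to bucket addresses by dormancy relative to your own business cycle instead of a one-size-fits-all cutoff. The sketch below is purely illustrative; the thresholds, labels, and field names are my assumptions, not a prescription, and you'd want to test any cutoffs against your own list.

```python
# Hypothetical sketch: classify subscribers by how long they've been
# dormant, measured in multiples of the business's purchase cycle.
# Thresholds and bucket names are illustrative assumptions.
from datetime import date

def classify(last_activity: date, today: date, cycle_days: int) -> str:
    """Bucket a subscriber by dormancy relative to the purchase cycle."""
    dormant = (today - last_activity).days
    if dormant <= cycle_days:
        return "active"            # within a normal purchase cycle
    if dormant <= 2 * cycle_days:
        return "reduce-frequency"  # keep top of mind, mail less often
    if dormant <= 3 * cycle_days:
        return "reactivate"        # try a win-back campaign first
    return "archive"               # move to a separate database

# Example: an annual-cycle business (e.g., birthday parties)
today = date(2024, 6, 1)
print(classify(date(2024, 1, 15), today, 365))  # active
print(classify(date(2022, 1, 1), today, 365))   # reactivate
```

The design choice worth noting is that "inactive" is defined per business, in cycle lengths, which is exactly why a blanket best-practice cutoff can misfire.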
What Are Best Practices, Anyway?
They’re tactics that should help you solve problems, correct mistakes, and optimize your e-mail marketing in order to attain that magical e-mail hat trick of more ROI (define), resources, and respect from management.
As my e-mail marketing colleague Loren McDonald likes to point out, however, we should think of them as “generally accepted best practices,” which implies that “best practices” aren’t necessarily universal rules of implementation.
Never implement a best practice blindly, just because someone like me tells you to do it during a panel discussion or in a trade publication. Always confirm it via testing against your own experiences, market, and mailing list.
On the other hand, don’t sneer at a best practice just because it would upset your company’s usual e-mail marketing approach or common belief system. Testing and performance data drive many best-practice recommendations. Always ask yourself, “What could I do differently to get a better result?” Learning the best practices for list acquisition, management, content, frequency, etc. is your first step.
The “what works” defense, after all, can merely defend the status quo. Can you make it work better? Testing can show you the answer. But you must listen to the data, and let it make the final decision for you.
Until next time, keep on deliverin’!