Dear John: Tips for Testing Personalized E-Mail Salutations

If you run into someone you know walking around town, you probably greet her with a hello and use her name. Doing this reinforces your relationship; it tells her you value her, whether she’s a friend, colleague, prospect or client.

The same holds true for email; it’s a relationship, and you should treat it as such. Often a salutation, whether generic or personalized, engages your reader and lifts your click-through rate (CTR). Here are a few tips for testing salutations, along with a tale of three different email salutation tests and the results.

Don’t Despair If You Don’t Have Names for Everyone

If you have names for everyone on your email list, you’re lucky. Many companies don’t. But don’t let this keep you from using personalization. In the short run, you can use a slug in place of the name. In direct marketing, “slug” refers to a generic term, such as “customer” or “reader,” that you can use when you don’t have a name (“Dear Customer” instead of “Dear Jeanne”).
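In a mail-merge script, the fallback can be as simple as substituting the slug whenever the name field is empty. A minimal sketch (the function name and default slug are illustrative, not from the column):

```python
def salutation(first_name, slug="Customer"):
    """Build a personalized salutation, falling back to a generic
    slug when no first name is on file."""
    name = (first_name or "").strip()
    return f"Dear {name}," if name else f"Dear {slug},"

print(salutation("Jeanne"))  # Dear Jeanne,
print(salutation(None))      # Dear Customer,
```

Guarding against blank and whitespace-only names matters in practice; a “Dear ,” salutation is worse than no salutation at all.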

What if you don’t have names for anyone on your list? You may still see a lift from using a generic salutation (“Dear Reader”). See an example in the first test case below.

If including a generic salutation provides a lift, consider gathering names for everyone on your list. Check out my column about a successful project where we did just that.

Don’t Assume Testing Is Expensive

This type of testing involves a small change to creative, a simple A/B split of your list, and basic personalization. Most email programs can handle this out of the box; if yours can’t, it’s time for an upgrade. It’s a small price to pay for a potential lift in your results.
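For the A/B split itself, hashing each address keeps assignments deterministic, so the same subscriber lands in the same group on every send. A minimal sketch of a 50/50 split (the salt and group names are illustrative assumptions):

```python
import hashlib

def ab_group(email, salt="salutation-test-1"):
    """Deterministically assign an address to 'control' or 'test'
    by hashing it; the same address always gets the same group."""
    digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"
```

Changing the salt reshuffles the split for the next test, so one test’s groups don’t silently carry over into another.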

Compare CTR and CTOR

Though a comparison of standard CTRs usually tells you which creative performed better, I prefer to look at click-to-open rates (CTOR) when comparing elements in the email body (not the sender and subject lines). The CTOR adjusts for any differences in each group’s open rates and provides a faithful apples-to-apples comparison.

The CTOR is calculated by dividing unique clicks by unique opens. For example:

                            Control Group    Test Group
Number assumed delivered        1,000,000     1,000,000
Unique opens                      300,000       250,000
Unique clicks                      80,000        80,000
CTR (%)                               8.0           8.0
CTOR (%)                             26.7          32.0

If we just looked at standard CTRs, we might think this was a tie. But when we calculate CTORs, we see the test creative did a better job of engaging the people who opened, resulting in a 20 percent lift to the CTOR.
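The arithmetic above can be sketched in a few lines, using the numbers from the example:

```python
def ctor(unique_clicks, unique_opens):
    """Click-to-open rate: unique clicks divided by unique opens."""
    return unique_clicks / unique_opens

control = ctor(80_000, 300_000)  # ~0.267, i.e. 26.7%
test = ctor(80_000, 250_000)     # 0.32, i.e. 32.0%
lift = test / control - 1        # ~0.20, a 20 percent lift
```

Identical clicks over fewer opens is exactly the case where CTR hides a real difference and CTOR reveals it.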

In some cases, notably email newsletters seeking to engage rather than sell, the CTOR can be a measure of success. If you’re looking for some type of conversion, say a sale or a lead, be sure to incorporate that final step into your measure of success or failure.

Follow Testing Best Practices

Best practices include making sure your test groups are large enough to yield statistically significant results (usually, 5,000 or more addresses will do it) and keeping everything but the test variable consistent. There are ways to adjust results and try to control for differences, but it’s a lot of work and adds a level of complexity to the process.
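If you want to check whether an observed CTOR difference is bigger than chance alone would explain, a standard two-proportion z-test does the job. This particular test is my suggestion, not something the column prescribes; a minimal sketch using only the standard library:

```python
import math

def two_proportion_z(clicks_a, opens_a, clicks_b, opens_b):
    """Two-proportion z-test on two CTORs.
    Returns (z statistic, two-sided p-value); a small p-value
    suggests the difference is unlikely to be random noise."""
    p_a, p_b = clicks_a / opens_a, clicks_b / opens_b
    p_pool = (clicks_a + clicks_b) / (opens_a + opens_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / opens_a + 1 / opens_b))
    z = (p_a - p_b) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

Run against the table above (`two_proportion_z(80_000, 300_000, 80_000, 250_000)`), the p-value is vanishingly small, so that 20 percent CTOR lift is real, not noise.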

The only time I break this rule is when we have a control that’s really fatigued, with low metrics all around. Here, creating a second version with a variety of new elements (akin to testing a completely new package in the direct mail world) can be a quick way to get the metrics back up. Just realize you’ll need to make the entire new package the control if it wins, because you won’t know what percentage of the lift was caused by which changed elements.

Test Case One

With a recent client, my first salutation test was in an email to current customers. It was a free offer, a no-cost upgrade of their existing service. The control version had no salutation or personalization; it jumped right into the message.

Although the test version was originally to have a personalized salutation, we had a last-minute glitch, and all the messages were sent with the generic salutation, “Dear Customer.” We were disappointed but hopeful the generic salutation would provide some lift. It did. The test version, with the generic salutation, provided a 4.1 percent lift to the CTOR.

Test Case Two

After hearing the results of the above test, another marketing team decided to conduct a salutation test. It was unable to get first names, so it pitted a test version with a generic salutation against the control, which had no salutation. Instead of an upgrade, this was a cross-sell effort. The email was sent to current customers who hadn’t yet purchased the service being marketed.

The results were surprising. The control version had the better CTOR; the generic salutation, which lifted response by 4.1 percent in the first test, decreased response by 13.0 percent. What happened? We’re not really sure. One thought is the salutation was well-received with a free offer but seemed phony with a sales offer.

Another hypothesis is that the strength of the relationship made a difference. Though both lists comprise current customers, test case one’s customers are perceived to be more involved with the company than those in test case two. Some of that has to do with the services they’ve bought from us, and with the competition and decision process involved in the purchase.

Is it worth repeating this test with the same list? Probably. If the generic salutation depresses response again, we have confirmation. If not, we can test a third time to break the tie and get a read on whether to use generic salutations in the future.

Test Case Three

Finally, I had the opportunity to test a personalized salutation (“Dear Jeanne”) against a control version with no salutation. I could have used the generic salutation as a control or mixed in a third version with this element, but I didn’t. There was just too much going on.

This email went to the same list used in test case one. It was another free offer, this time a sweepstakes. The test version, with the personalized salutation, provided a 13.0 percent lift to the CTOR. This was over three times what we’d seen when we tested a generic salutation against the control in test case one.

Moral of the Story

There are a few lessons to take away from this story. First, you always have to test. What works in one instance may bomb in another. You can’t read a case study and know your results will be the same. Test everything with your own lists and offers before you make a change.

Second, double-check and back-test. It never hurts to confirm results. You shouldn’t just do this once, but periodically. What works today may be tired six months or a year from now.

Finally, always, always test on every send. Testing is the best way to continually tweak email efforts and improve metrics. There are so many things you can test; personalizing and adding a salutation are just the tip of the iceberg. Put a plan in place and go for it. Then, let me know how it goes!


Want more email marketing information? ClickZ E-Mail Reference is an archive of all our email columns, organized by topic.
