Improve Measurement of Behaviorally Targeted Ads

Measurability has been an enviable advantage of online media and advertising since the get-go. It’s an inherent and expected part of what we do as Internet marketers and communicators.

So why don’t we demand more when it comes to measuring the effectiveness of behaviorally targeted placements or campaigns?

About two years ago, when I first started educating myself about behavioral targeting, industry people were talking about how CTRs were better with behavioral targeting than run-of-site (ROS). But as most experienced people in this business know, CTR can also be improved by intentionally misspelling words in headline copy or by making the text blink.

In other words, increased CTR isn’t necessarily something by which to determine marketing or communication effectiveness. At least, it’s nothing to fixate on as a key performance measure.

After the CTR school of behavioral targeting measurement came the audience composition survey school. In effect, it said, “We can do better than that. Let’s split the impressions between behavioral targeting and ROS and take an audience composition survey to determine whether behavioral targeting really does reach the target more efficiently.”

Did it ever.

The first campaign we ran showed lift of up to 300 percent against our primary target audience at substantially lower cost than ROS. On the media plan, the behavioral targeting impression CPM was a bit higher than the ROS, but the actualized cost to deliver a thousand impressions to our target audience (the targeted CPM) was lower by a good deal.
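The targeted-CPM math is simple enough to sketch. The figures below are hypothetical, chosen only to illustrate the mechanic, not taken from the campaign above: divide the CPM you paid by the share of impressions that actually reached your target (from the audience composition survey), and a pricier behavioral buy can come out cheaper per on-target thousand than ROS.

```python
def targeted_cpm(paid_cpm: float, target_composition: float) -> float:
    """Effective cost per thousand impressions delivered to the target.

    paid_cpm: the CPM on the media plan.
    target_composition: fraction of delivered impressions that reached
    the target audience, per the audience composition survey.
    """
    return paid_cpm / target_composition

# Hypothetical numbers: a $6.00 behavioral CPM at 60 percent target
# composition versus a $4.00 ROS CPM at 20 percent composition.
bt = targeted_cpm(6.00, 0.60)   # behavioral targeting placement
ros = targeted_cpm(4.00, 0.20)  # run-of-site placement

print(f"BT targeted CPM:  ${bt:.2f}")   # $10.00 per on-target thousand
print(f"ROS targeted CPM: ${ros:.2f}")  # $20.00 per on-target thousand
```

The behavioral line costs more on the media plan, but once you price only the audience you actually wanted, it delivers that audience at half the cost in this made-up scenario.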

Audience composition surveys were definitely a step in the right direction. But we can’t stop there. Advertisers, agencies, Web publishers, and behavioral targeting service providers must push further.

I’ve seen a few good examples of how the effectiveness of behavioral targeting placements can be measured just as well as any other tactic on the list. One example is brand advertisers measuring the branding impact of behavioral targeting placements.

Take the American Airlines campaign that ran on “The Wall Street Journal Online,” powered by Revenue Science. AA wanted to deliver awareness of a business-travel-related offering to frequent business fliers. The campaign ran across 14 different Web properties. The behavioral targeting placement reached business fliers (those who fly once a year) and frequent business fliers (five times a year). That’s much better than ROS: more than 100 percent better, according to the audience composition survey.

But AA didn’t stop measuring there. An awareness study attached to the campaign also showed lift across five of the six metrics measured, with lifts of 200 to 300 percent above average on some key metrics.

“Through the American Airlines case study, we found that not only can behavioral targeting increase the composition of an advertiser’s high-level target demos, but, with careful consideration of the message, site audience, and content, it can deeply affect specific branding metrics,” says Mike Henry, vice president of sales and marketing at Dow Jones Online, the site’s publisher. “In 2005, some amount of behavioral targeting is included as a component in most campaigns, often with messaging that is customized for the projected audience.”

Holding behavioral targeting placements accountable to the same metrics as overall campaigns is the way forward for buyers and sellers of behaviorally targeted advertising. That means measuring branding impact for brand advertisers, and sales and conversions for direct marketers. And, of course, it means no longer benchmarking against ROS.
