Advertisers and agencies generally ask the same four questions about behavioral targeting: Does it work? Do advertisers buy it? Are there differences among the various systems and implementations? And, how do you measure campaign effectiveness?
Here are the answers:
Yes. Behavioral targeting works and can be quite effective when it's used appropriately and results are correctly tracked and interpreted.
Yes, advertisers and agencies buy a lot of it. Probably not nearly as much as sellers and service providers would like, but it's a new market and these things take time.
Yes. Not all behavioral targeting is alike. Various technologies work quite differently and use different methodologies. Their core systems function differently, in some cases dramatically so. Advertisers should expect quite different campaign results from sites using different services.
Behavioral targeting on Yahoo differs from Advertising.com, which differs from LATimes.com, which differs from The Wall Street Journal Online. Beyond technical system differences, each organization typically designs and uses targeting in ways that take unique advantage of its specific audience and environment. Comparing campaigns across publishers with different systems is comparing apples to oranges. This isn't to say a campaign that works with one publisher won't work with another, or that one that performed poorly with one publisher can't succeed elsewhere.
If you want to get to the bottom of these differences and truly measure effectiveness, you must understand the following.
Establish clear objectives for the campaign, whether based on branding or communication metrics (e.g., reach and frequency) or on desired responses (e.g., audience transfers (clicks), prospect acquisitions, or sales). You must clearly define your desired target audience, their behaviors, and a basis for those behaviors. That last part is critical.
If you want to target users who are in the research stage for buying mobile phones, be certain the publisher's definition of "in-market mobile phone buyer" includes users who recently visited mobile phone plan configurators or pages with feature stories on the latest mobile handsets.
You don't want your messages delivered to people who navigated to articles simply because those articles contained terms such as "cellular," "mobile phone," or "Motorola." If you do, you'll find the majority of the audience read about city council approval for new "cellular" transmission towers; schools that ban children from bringing "mobile phones" to class; or "Motorola" stock reports. (Test it: Enter those words or similar ones into a major news site's internal search box and see what comes up.) Such stories have nothing to do with interest in buying phones. Targeting ads for a new Sprint PCS offering to such pages is akin to targeting Ford F-150 ads to pages with news stories about EPA investigations into emissions compliance records of pickup trucks. It's not just a wasted ad delivery; it could damage your brand.
First, ensure you received the right target audience. An appropriate metric might be audience composition: what's the makeup of the audience that viewed the campaign? Don't get too excited about "relative" audience composition metrics, like when a publisher tells you your business-traveler-targeted campaign reached 300 percent more business travelers than if you'd bought a run-of-site campaign.
Those numbers may tell the publisher how efficiently it uses its inventory and may help you understand if you got a good deal on a targeted campaign versus a run-of-section or -site campaign. But they don't tell you how effective the ad and delivery were. You bought the campaign because it was targeted, so you expect the target audience composition to be dramatically higher than that of the site in general. If it weren't, you wouldn't have bought it.
Audience composition indexes are interesting metrics. They help publishers efficiently price campaigns and inventory but say nothing about campaign effectiveness. Don't think you had a great campaign because someone told you it had a 300 percent lift in audience composition. That's largely meaningless and should only serve as a basic validation of the targeted delivery.
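To make the arithmetic concrete, here's a minimal sketch (with entirely hypothetical numbers) of how a relative composition index like that "300 percent" figure is typically derived: the target audience's share of campaign impressions divided by its share of site-wide traffic. Note how the index depends only on delivery, not on whether anyone responded.

```python
def composition_index(target_in_campaign, campaign_total,
                      target_on_site, site_total):
    """Relative audience composition: the target audience's share of
    the campaign, divided by its share of the site overall, as a
    percentage. 100 means no better than run-of-site."""
    campaign_share = target_in_campaign / campaign_total
    site_share = target_on_site / site_total
    return 100 * campaign_share / site_share

# Hypothetical numbers: 30% of campaign viewers were business
# travelers, on a site where they are 10% of visitors overall.
index = composition_index(300, 1_000, 10_000, 100_000)
print(index)  # 300.0
```

A 300 index validates that delivery was targeted, but nothing in the calculation touches brand awareness, clicks, or sales, which is exactly why it can't stand in for campaign effectiveness.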
If relative audience composition metrics don't communicate effectiveness, what does? For a branding campaign, delivered reach and frequency against the target audience; audience responses by creative; and lift against intended branding metrics, such as brand awareness, brand favorability, or purchase intent, are more accurate measures of success. The former metrics are generally delivered as a report from the behavioral targeting service or ad server. They likely differ by service and system, so be sure to get a full explanation of the numbers and what they're based on. Lift metrics are usually delivered in the form of pre- and post-campaign reports from a service such as Dynamic Logic.
Truly understanding results from such studies is critical. If you're interested in top-of-funnel branding, such as for a manufacturer client, then awareness metrics are key. If you're interested in bottom-of-funnel branding, such as for a retailer, then purchase-intent metrics are key. Whichever it is, if your target audience was defined correctly, the behavioral targeting methodology is sound, and the creative is strong, these numbers will move significantly. If they don't, examine the three dependencies (target audience definition, segmentation and targeting methodology, and creative) to pinpoint the trouble.
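A brand-lift number from a pre/post study usually boils down to a simple comparison between an exposed group and a control group. The sketch below uses hypothetical survey counts and is not Dynamic Logic's actual methodology; it only illustrates what "lift" means arithmetically.

```python
def brand_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Percentage-point lift in a branding metric (e.g., purchase
    intent) for an ad-exposed group versus an unexposed control."""
    exposed_rate = exposed_yes / exposed_n
    control_rate = control_yes / control_n
    return 100 * (exposed_rate - control_rate)

# Hypothetical survey: 120 of 500 exposed respondents express
# purchase intent versus 90 of 500 in the control group.
lift = brand_lift(120, 500, 90, 500)
print(round(lift, 1))  # 6.0 percentage points
```

When a vendor reports lift, ask whether it's expressed in percentage points (as here) or as a relative percentage of the control rate; the two can differ dramatically, and conflating them is a common source of inflated claims.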
It's essential you take the time to ask questions and learn how metrics are built and how they work. Challenge them when you don't get the results you seek. If the creative performs well on other campaigns with similar target audiences and your behaviorally targeted campaign results in flat branding lift numbers, challenge them. Understand the differences, and learn if something is wrong.
When campaign goals are more response-oriented, establishing benchmarks is relatively straightforward. Whether your client wants registrations, completed applications, or sales, clearly identify the metrics you'll track and their sources. If you truly want to optimize results, share this information with the publishers as the campaign progresses, so they can optimize segmentation and targeting.
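For response-oriented goals, the benchmarks themselves are straightforward arithmetic. A minimal sketch, again with hypothetical flight numbers, of the metrics you'd track and share back with the publisher for optimization:

```python
def response_metrics(spend, impressions, clicks, conversions):
    """Basic response benchmarks for one campaign flight:
    click-through rate, click-to-conversion rate, and
    cost per acquisition. All inputs are hypothetical."""
    return {
        "ctr_pct": 100 * clicks / impressions,
        "conv_rate_pct": 100 * conversions / clicks,
        "cpa": spend / conversions,
    }

m = response_metrics(spend=5_000, impressions=1_000_000,
                     clicks=2_000, conversions=100)
print(m)  # {'ctr_pct': 0.2, 'conv_rate_pct': 5.0, 'cpa': 50.0}
```

The point of sharing figures like these mid-flight is that a publisher with good segmentation tools can shift delivery toward the segments with the lowest CPA, something it can't do if it only ever sees its own impression counts.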
Many agencies and clients aren't comfortable with this kind of transparency. That's unfortunate, as publishers with sophisticated targeting capabilities can really make a difference if they can optimize against back-end data. Under any circumstance, in this area as the others, it's essential to understand the source of the data and the methodology that produces it. This means lots of questions and lots of skepticism.
Hype vs. Reality
The only way to truly understand behavioral-targeted campaign effectiveness is to dig beneath the hype and uncover reality in numbers and performance. Behavioral targeting is already over-hyped. With frothy capital markets and IPO hopes looming, the hype will only intensify. This will attract more vaporware. We saw it during the dot-com bubble, and we'll see it again.
If it's too slick and too good to be true, it probably is. Ask hard questions. Demand clear answers.
March 19, 2014