There is a lot of talk and hype surrounding audience verification these days. Advertisers and their agencies like the idea of deploying vendors such as comScore or Nielsen alongside their campaigns to validate the audiences reached. The verification process is straightforward, and most people probably see little risk in it. However, validating audiences at the campaign level creates the potential for massive discrepancies and friction in the buying process.
I can appreciate why an agency would want to clearly understand the composition of the audience reached with a campaign and limit exposure to people outside its core target. However, the current methods for determining audiences create a number of challenges for both the client and the publisher. When a buyer asks a publisher to deliver a certain audience, say men 18 to 34, there are a few ways to approach the targeting. The publisher can deliver the campaign in context, like sports, where prior research shows a high concentration of men 18 to 34, or it can target the audience directly using either first-party registration data or third-party data sources like BlueKai.
But this is where the friction begins. When one data source is used for targeting and a different one for verification, there will inherently be a large discrepancy in the audiences reported. And when a client receives a report with a major audience discrepancy, there can be a number of negative outcomes.
At the end of the day, no one likes discrepancies, and when they happen they create more work for both the agency and the publisher. The agency often rushes to reallocate spend or make billing adjustments. The publisher, meanwhile, wastes inventory, loses revenue, and does additional work to justify either the quality of the inventory or its audience composition. The vendors and their data can also be called into question, in which case they risk losing business while adding confusion and cost to the buying process.
How can we avoid this additional tax on the industry while still helping clients and agencies understand the composition of the audiences their campaigns reach? I would propose a universal standard for audience verification that both marketers and publishers can use. At a minimum, publishers should demand that the same data set used for targeting also be used to verify the campaign. That approach is less efficient, but in the near term it would go a long way toward limiting audience discrepancies, assuring marketers that the right audiences are reached, and reducing lost revenue for publishers.
Larry Allen is SVP, Global Platform Sales for Xaxis. He has responsibility for overseeing solutions for publishers including Xaxis for Publishers, Xaxis Exchange, and Xaxis Marketplace globally.
Larry has extensive experience in digital media, marketing, and business strategy. Prior to joining 24/7 Media (which merged with Xaxis in 2014), he held senior management positions at digital media companies including AOL, Viewpoint, Unicast, Yieldex, Real Media, and TACODA.
Larry also ran his own consulting business where he advised many major media companies such as The New York Times, Meredith, 33Across, and Business Insider. He is a frequent contributor to a number of trade publications, blogs, and industry conferences.
A graduate of Clarion University of Pennsylvania with a degree in Business Management, Larry is based in Xaxis' headquarters in New York City.
Follow him on Twitter at @lawrenceallen2.
March 19, 2014