The State of Interactive Research

Chances are, if you’re in the interactive marketing business, things are going pretty well these days. Budgets are up, enthusiasm is high, and optimism is in the air. More and more, we’re speaking with our clients about what we should do, as opposed to whether we should do it.

I don’t mean to rain on the parade, but I fear that unless we step back for a moment and take a look at the state of interactive research (syndicated research in particular), we may find ourselves dealing with widespread mistrust. I’m speaking primarily about the two leading syndicated ratings companies: Nielsen NetRatings and comScore Media Metrix.

It really is time to nip some issues in the bud.

Glaring Issue #1: Syndicated Numbers Don’t Match (They Aren’t Even Close)

To illustrate this, I picked one site for comparison: CNN Money. I looked at the latest information, from April 2004, combining home and work populations. Even the most basic metric, total unique visitors, showed significant variance: Nielsen reported 8.2 million unique visitors; comScore Media Metrix reported 5.5 million, a gap of nearly 50 percent. Which is correct?

Glaring Issue #2: Syndicated Numbers Don’t Match Server Logs

Nothing new here, though this age-old issue has a new twist. Web server logs are widely believed to be the most accurate accounting of a site’s unique reach. However, marketers and agencies look for standardized data, delivered by a third party, across all properties.

Many publishers use cookies to track unique visitors and arrive at their internal traffic numbers. The longstanding belief has been that less than 3 percent of all Web users delete their cookies on a regular basis, so the method should yield a reasonably accurate count. But in discussions with Nielsen NetRatings, new information suggests upwards of 30 percent of users delete their cookies (either manually or through installed software) on a regular basis. Nielsen plans to run tests to confirm this in the coming months. If it’s true, it will once again call into question which numbers are the “right” ones.
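To make the stakes concrete, here’s a back-of-the-envelope simulation. This is a hypothetical Python sketch, not either company’s actual methodology; the audience size, visit counts, and the worst-case assumption that deleters clear cookies between every single visit are all made up for illustration:

```python
import random

def measured_uniques(actual_visitors, visits_per_user, deleter_share, seed=0):
    """Count cookie-based 'uniques' for a fixed real audience.

    A deleter_share fraction of visitors clear cookies between visits,
    so each of their visits_per_user visits sets a fresh cookie and is
    counted as a brand-new 'unique'. Everyone else is counted once.
    """
    rng = random.Random(seed)
    cookies_issued = 0
    for _ in range(actual_visitors):
        if rng.random() < deleter_share:
            cookies_issued += visits_per_user  # every visit looks new
        else:
            cookies_issued += 1                # one cookie, one unique
    return cookies_issued

actual = 100_000  # hypothetical true audience size
for share in (0.03, 0.30):
    measured = measured_uniques(actual, visits_per_user=5, deleter_share=share)
    print(f"{share:.0%} deleters: {measured:,} measured vs {actual:,} actual "
          f"({measured / actual - 1:+.0%} overcount)")
```

Even under these toy assumptions, the jump from 3 percent to 30 percent moves the cookie-based count from roughly a 12 percent overstatement to more than double the true audience.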

Glaring Issue #3: Inconsistent Naming Conventions

Does “Walt Disney Internet Group” (comScore) = “Disney Online” (Nielsen)?

Does “NY Times Digital” (comScore) = “NYTimes.com” (Nielsen)?

The answer in both cases is no. There seems to be no rhyme or reason to the naming conventions of the two syndicated services, and no reliable way to map one onto the other. As you’d imagine, this affects everything from competitive reporting to reach and frequency forecasting.

I’m happy to report that industry organizations (the IAB, OPA, etc.) are currently working toward naming and classification normalization. Let’s hope they reach a solution in the near term.
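Until that normalization arrives, the practical workaround at many shops is a hand-maintained crosswalk between the services’ entity names. Here’s a minimal sketch of the idea; the mappings and canonical IDs below are hypothetical placeholders, not verified equivalences:

```python
# Hand-built crosswalk: (service, reported entity name) -> internal canonical ID.
# Every entry must be verified by hand, because similar-looking labels do NOT
# imply the same site roll-up. All mappings here are hypothetical examples.
CROSSWALK = {
    ("comscore", "Walt Disney Internet Group"): "disney-full-rollup",
    ("nielsen", "Disney Online"): "disney-online-only",
    ("comscore", "NY Times Digital"): "nytimes-corporate",
    ("nielsen", "NYTimes.com"): "nytimes-flagship",
}

def canonical(service: str, entity: str) -> str:
    """Resolve a service-specific entity name, failing loudly on gaps so an
    unmapped name never slips silently into a competitive report."""
    key = (service.lower(), entity)
    if key not in CROSSWALK:
        raise KeyError(f"No verified mapping for {key!r}; research and add it.")
    return CROSSWALK[key]
```

The point of failing loudly is that a silent mismatch is exactly how a “Disney Online” number ends up compared against a “Walt Disney Internet Group” number.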

Update On Other Industry Initiatives

Beyond the naming convention work, it’s worth quickly mentioning some of the other industry initiatives underway in the standards and practices space. For the most part, the major efforts fall into two buckets: auditing/impression-counting methodology and reach/frequency.

The AAAA (American Association of Advertising Agencies) and the MRC (Media Rating Council) are working on an initiative to create a standard definition of an ad impression. That standard would then be audited and certified through the comprehensive MRC auditing process. While significant headway has been made on the publisher side, there are currently some barriers to getting the large ad-serving companies (DoubleClick, Atlas) to play along. It’s imperative that both sides become accredited. We should all urge our ad-serving partners to move ahead with this critically important initiative. If we exert enough pressure, they’ll have no choice but to capitulate.

To round out the industry initiatives, a lot of effort is going into the reach and frequency arena. Specifically, the ARF (Advertising Research Foundation) is spearheading work in several areas. It plans to start by running the same ad campaign through the three leading systems (comScore, Nielsen, and Atlas) and identifying the differences in their outputs. It’s also looking to compare reach and frequency outputs built from server-centric versus panel-centric data. Finally, the ARF hopes to commence a “cume study” that would use hundreds or even thousands of online campaigns to build stable (and accurate) reach curves.
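For a sense of what a reach curve is: it models the share of an audience reached as a diminishing-returns function of impressions delivered, since additional impressions increasingly land on people already reached. The sketch below uses a simple negative-exponential form with made-up numbers; it’s purely illustrative, not the ARF’s model, and the functional form, audience size, and k parameter are all assumptions:

```python
import math

def reach(impressions, audience_size, k):
    """Negative-exponential reach curve: each additional impression is
    increasingly likely to hit someone already reached, so reach shows
    diminishing returns as impressions grow."""
    return audience_size * (1 - math.exp(-k * impressions / audience_size))

audience = 10_000_000  # hypothetical addressable audience
k = 0.8                # hypothetical saturation parameter

for imps in (1e6, 5e6, 10e6, 50e6):
    r = reach(imps, audience, k)
    print(f"{imps / 1e6:4.0f}M impressions -> {r / 1e6:4.1f}M reached "
          f"({r / audience:5.1%}), average frequency {imps / r:.1f}")
```

In practice, a cume study would estimate parameters like k from hundreds of real campaigns, which is what would make the resulting curves stable enough to plan against.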

We should probably walk before we run. We’re still working out some of the basics. We’ll get there eventually. Of that, I’m confident.
