Taking Stock of Our Toolbox, Part 1

As we round midyear and look toward contract renewals for 2004, we’re starting to have update meetings with many syndicated research companies in the Internet space. Though hardly a revelation, it’s increasingly clear that, as an industry, we have a number of duplicative services providing nearly identical intelligence. (Don’t be silly — of course the numbers don’t match. But they are trying to measure the same thing.)

Although we subscribe to nearly all the available tools, some important points of distinction certainly warrant investigation. We divide the syndicated research world into three big buckets: general industry landscape/trends, online ratings services, and competitive intelligence.

General Industry Landscape/Trends

Many players live in this space. We view Jupiter Research (a unit of this site’s parent corporation) and Forrester Research as our primary sources for interactive trends and usage insights. Each has its strengths and weaknesses. Forrester seems to do a more thorough job covering the technology vertical, as well as broadband, wireless, and interactive television. On the other hand, Jupiter provides more insight into the consumer packaged goods (CPG) vertical and online advertising in general.

Perhaps it’s a sign of the times, but both these services seem to have significantly reduced the regularity with which they publish new research. Whenever we cite information older than 12 months in client presentations, clients ask, “Isn’t that information already out of date?” I say they’re absolutely right.

Online Ratings Services

This is where the battle is waged and where the fate of our industry lies. There’s been significant consolidation and reorganization in this area in the last few years. Two companies and four primary services are left standing. Nielsen//NetRatings weighs in with its NetView and @Plan products. comScore comes to the table with AiM and Media Metrix.

AiM and @Plan live in the recall/questionnaire camp. Although each claims to be markedly better than the other with better questionnaires and more frequent data collection, the primary methodology (in the opinion of many) is deeply flawed. The notion behind these services is to ask a group of online users to fill out a questionnaire on a quarterly basis and report back on their on- and offline usage. Through this, the services gain rich qualitative insights. Maybe I’m unique, but I can’t remember what I did yesterday, let alone which Web properties I visited in the past 30 days (as the questionnaire requires). Though the data output is appealing from a qualitative standpoint, the adage “Garbage in, garbage out” comes to mind.

In the other camp are metered services, which solicit users from at-home and at-work perspectives. Users install software on their PCs, allowing the research companies to track their online activities. Though there are many limitations with these services as well, including a very limited at-work sample (typically, more small to mid-sized companies allow tracking software installation than larger enterprises), they do provide a rich data set from which to glean insights.

Assuming the metered services provide the most accurate data, the next question is, “What can we do with the data?” This is where the most activity currently resides: reach/frequency/gross rating points (GRP).
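The arithmetic behind these metrics is straightforward; the hard part is feeding it trustworthy panel data. As a rough sketch (the campaign numbers below are invented for illustration):

```python
# Toy reach/frequency/GRP math for a single campaign.
# All inputs are hypothetical; real tools derive them from panel data.

def media_math(impressions, unique_users_reached, universe):
    """Return (reach %, average frequency, GRPs)."""
    reach_pct = 100.0 * unique_users_reached / universe
    avg_frequency = impressions / unique_users_reached
    grps = reach_pct * avg_frequency  # GRPs = reach % x avg. frequency
    return reach_pct, avg_frequency, grps

# Hypothetical campaign: 5M impressions delivered to 1M unique users
# out of a 20M-person target universe.
reach, freq, grps = media_math(5_000_000, 1_000_000, 20_000_000)
print(f"Reach: {reach:.1f}%  Frequency: {freq:.1f}  GRPs: {grps:.0f}")
# -> Reach: 5.0%  Frequency: 5.0  GRPs: 25
```

The formula itself is standard media math; everything contentious lives in the `unique_users_reached` estimate, which is exactly what the panel methodologies above disagree about.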

Ask anyone who’s test-driven WebRF (a joint venture between Nielsen//NetRatings and IMS), and you’re likely to get a strong reaction. Let’s just leave it at this: the product has a long way to go. Usability issues aside, there are some real, fundamental flaws with some underlying assumptions in the program (can you accurately predict a six-month campaign’s reach and frequency from one month of data?).
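To see why scaling one month of data to six is risky: cumulative reach grows sublinearly, because much of each later month’s audience has already been reached, while impressions keep piling onto the same people. A toy illustration (every number here is invented):

```python
# Toy illustration of why linear extrapolation overstates campaign reach.
# All figures are made up for the example.

monthly_reach_pct = 20.0  # hypothetical: 20% of the target reached each month
overlap = 0.6             # hypothetical: 60% of each later month's audience
                          # was already reached in earlier months

# Naive extrapolation: multiply one month's reach by six.
naive = monthly_reach_pct * 6  # 120% -- an impossible reach figure

# Accounting for audience duplication month over month.
cumulative = 0.0
for month in range(6):
    incremental = monthly_reach_pct * (1 - overlap) if month else monthly_reach_pct
    cumulative = min(100.0, cumulative + incremental)

print(f"Naive 6-month reach: {naive:.0f}%  With overlap: {cumulative:.0f}%")
# -> Naive 6-month reach: 120%  With overlap: 60%
```

The constant-overlap assumption is itself a simplification; the point is only that any one-month-to-six-month projection has to model duplication, and that is where such tools can go wrong.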

Recently, comScore released a beta of its new reach and frequency tool. Though much more intuitive, it also has some real interface problems (e.g., if you change the campaign duration, available page impressions on a site don’t change accordingly). Perhaps comScore will solve this glitch after the beta phase.

Take all these issues and marry them with some tests we’ve done at the agency comparing WebRF outputs to actual campaign delivery (as measured by DART), and your head will spin. Results have been all over the map. Actual campaign delivery didn’t resemble the WebRF outputs at all from a reach and frequency standpoint. This raises fundamental questions about the quality of the data going into the calculation, as well as the methodology used to derive reach/frequency/GRP. We’ve not yet tested the reach forecaster product from Atlas DMT, but I’ve heard good things about it. Before we admit defeat, we should probably give that a try.

Next column, we’ll look at the many competitive tracking systems in the Internet space and examine their pros, cons, and accuracy (or lack thereof). We’ll also spend a little time dreaming about tools we wish we had and would include if we were writing Nielsen’s or comScore’s business plan for the coming year.

If you could ask for any planning tool improvement, what would it be? Drop me a note. I’ll report the findings.

Join us at the Jupiter ClickZ Advertising Forum in New York City on July 30 and 31.
