No Online “Joe Millionaire” (Thank Goodness!)

We all reveled in the likes of Joe Millionaire, Michael Jackson, American Idol, even Saddam Hussein. That’s finally over. To quote Jeff Zucker, president of NBC Entertainment, “all I can say is ‘Thank God’.”

The conclusion of television’s sweeps period, with its attendant hype and frenzy, once again highlights weaknesses in audience measurement systems — even in the most traditional of media.

James Surowiecki put it well in his “New Yorker” column last month, when he wrote: “There are three important things to know about sweeps. The first is that they are deeply flawed, and of little use, in the end, to the networks, the advertisers, and the viewers. The second is that everyone in television knows this. The third is that no one has done anything about it.”

Just after Surowiecki’s presumed deadline, Nielsen Media Research, the organization behind sweeps frenzy, announced it would expand the reach of its “portable people meters” to the top 10 U.S. markets. This may finally result in technology sweeping aside the old-fashioned diary method, which unsurprisingly delivers unreliable results.

This column isn’t about television; it’s about the most measurable of media, the Internet. Online could be said to suffer from a surfeit, rather than a dearth, of technology and information. Where do we turn for the truth?

“The person with one watch always knows what time it is, and the person with two watches never knows what time it is,” says Gabe Samuels, senior vice president of information at the Advertising Research Foundation (ARF). “There are two watches on the Internet. In fact, there are at least four.”

Those four are the two dominant panel-based measurement companies, Nielsen//NetRatings and comScore Media Metrix, and the two dominant ad-serving technology firms, Atlas DMT (owned by Avenue A, soon to be renamed aQuantive) and DoubleClick.

A flurry of news stories this week detailed comScore Media Metrix's restatement of three months of data, which the company characterized as a tweak to the filtering and projection methodology it developed after its purchase of Media Metrix. (Because of a patent dispute, the original Media Metrix panel software went to Nielsen//NetRatings rather than comScore, so comScore had to start from scratch in that area.)

Re-jigging research methodologies happens all the time, most often without such straightforward disclosure. Some saw this restatement as more evidence that Web audience measurement systems are unreliable. Personally, I was happy to see the Internet audience measurement issue raised in high-profile national consumer publications. But I don't believe reliability is the relevant issue. After all, comScore didn't have to come clean about its methodology problems. It could simply have changed its methodology going forward and forgotten about it, as so many other research companies do. Instead, the restatement allows for apples-to-apples comparisons between months going forward.

Apples-to-apples comparisons are rare in the Internet business. It’s amazing how long the industry has managed to survive without a real definition of an impression, much less any sense of what Web site audiences really look like. That we’ve managed to get this far speaks to the resilience of interactive media buyers and sellers.

Although we need to move forward, I don’t believe we should abandon the strengths of the Internet — its inherent measurability and the various sources of information on audiences — and simply adopt a single solution. Obviously, there are some who would like that to happen.

“There are still a lot of other players out there, but there’s not one major book of numbers, and we think that Nielsen//NetRatings is positioned to be that service,” said Sean Kaldor, director of marketing for NetRatings.

I look forward to the results of an ongoing ARF study, and I encourage sites, marketers, and measurement companies to cooperate with it. The organization is gathering data about a variety of campaigns representing a number of different industries and a range of media outlets. Data are coming from every conceivable source — publishers, ad servers, and panel-based measurement companies — with the aim of pinpointing the differences among the various numbers that will result.

“They’re not going to be small differences, they’re going to be large, but they’re not going to be easily predictable,” said Samuels. “We’re trying to understand why the differences occur. If you can convert from one set to another set, then there’s no problem. With any luck, within a few months, we’ll have some answers… or at least some more questions.”

The trend toward incorporating reach and frequency measurement into the media planning process may complicate things further, given that each solution is likely to incorporate both server- and panel-based data. What's the best combination?

The good news is people are asking these questions, and this industry isn’t firmly entrenched in deeply flawed research traditions like sweeps. Here’s hoping it never will be.
