Lies, Damned Lies, and IAB Statistics

Many online marketers like to keep track of numbers. Fortunately, there’s no shortage of them in this business. From the campaign statistics reported by Web sites and ad servers to industry statistics reported by research firms and trade groups, numbers litter our inboxes. Lately, I’ve started digging into the data behind all these numbers. Often, I’m surprised at what I find.

Internet Ad Revenue Report

One set of numbers worth closer scrutiny is the recent Interactive Advertising Bureau (IAB)/PricewaterhouseCoopers (PWC) Internet Ad Revenue Report. At first, the report on second quarter revenues appears similar to other industry accounts. It says the online ad spend grew slightly from the first quarter to the second and estimates the industry will pull in around $6.5 billion this year. But when the report delves into detail and tries to break out revenue by “advertising vehicle,” the numbers stop making sense.

I’m among the first to admit industry statistics are rarely perfect. But they’re rarely this imperfect, either. The IAB is attempting to illuminate its members’ industry with a broken flashlight. It has created a partial list of popular ad formats, creative technologies, and pricing methodologies and pretends they’re mutually exclusive. It has given us ad vehicle numbers that simply are not credible.

There are several good ways to evaluate online advertising. You can break out revenues by ad placement, such as on-page ads, interstitials, and email. You can break out revenues by creative technology, such as GIF, rich media, and streaming media. You can look at payment methodology, like impression-based, pay-for-performance, and sponsorship deals. But you cannot — as the IAB does — mix all these variables together.

The report’s advertising-vehicle chart lists the percentage of ad dollars spent on nine often-overlapping vehicles: keyword search, display ads, classifieds, sponsorships, rich media, slotting fees, email, interstitials, and referrals. But how does the research classify an interstitial priced as a sponsorship? What about display ads targeted against keyword searches? What about email marketing that utilizes rich media? Believe it or not, the IAB asks its members to decide on their own how to categorize these, and other, forms of overlapping revenue. You can be certain different members categorize this money in different ways.

The IAB has a large, influential membership. It’s in a good position to report well-articulated industry data. That makes it even more disappointing to see it using a flawed methodology. Until the IAB and PWC fix that methodology, it’ll be hard to believe their numbers.

Ad Servers Report Ad Display Time

Meanwhile, two major ad servers are producing questionable numbers as well. Rich media companies such as Unicast and Eyeblaster have long reported on the average length of time their ads are displayed, a metric called “ad display time.” They reason the longer an ad is displayed, the stronger its brand impact on a user.

During the recent flurry of rich media development, ad display time trickled up to ad-serving heavyweights Atlas DMT and DoubleClick. Atlas believes in these perceived branding effects enough to call this metric the “Brand Exposure Duration.”

But there’s a problem with these numbers: None of the companies reporting ad display time has proven it’s an effective proxy for branding. Brand impact relies on a number of factors, most notably creative message quality and the ad’s relevance to its context. Although there’s little doubt ad display time plays some role in creating brand impact, presenting this metric as a proxy for branding tells an incomplete story and hurts advertisers’ ability to gauge the value of campaigns.

This industry needs a good, easy way for advertisers to understand the brand impact of their advertising. Ad display time doesn’t look like the answer.
