Interpreting the IAB Measurement Guidelines

On November 15, 2004, the Interactive Advertising Bureau (IAB), American Association of Advertising Agencies (AAAA), Media Rating Council (MRC), Association of National Advertisers (ANA), World Federation of Advertisers (WFA), and numerous other industry organizations released new global measurement standards for online advertising. I was part of the Measurement Guidelines Task Force that established these standards, so I have a unique perspective on them.

There’s been a lot of misunderstanding around the measurement guidelines. Let’s start with the official name: “Interactive Audience Measurement and Advertising Campaign Reporting and Audit Guidelines.” It’s a mouthful, but it’s important to understand what the document aims to accomplish.

The document sets standards for audience measurement in online ad campaigns and for ad campaign reporting. It also establishes auditing guidelines for how various vendors and publishers should be audited, both to ensure everyone is doing things properly and to reduce discrepancies between publishers and third parties.

This all started because of Adam Gerber, who at the time chaired the AAAA’s Interactive Marketing & New Media Committee. Gerber was trying to resolve one of the industry’s biggest issues: constant discrepancies between publishers and third-party ad servers. This problem creates significant work on all sides and has a huge effect on internal accounting processes because of the procurement guidelines most major advertisers are required to follow.

One of the first things this document tackles is the oft-disputed definition of an ad impression. It’s amazing it took so many years to establish the definition of our currency, but it’s now accomplished. Next, the document establishes the appropriate methods of counting impressions for publishers and third-party ad servers and a few related things, such as caching and robot and spider filtering.

The measurement guidelines require third-party ad servers and publishers to be audited, and they define the audit process. The guidelines also recommend which set of numbers to use if one party isn’t audited: if the third party is audited and the publisher isn’t, the third-party numbers should be used for billing. If both sides are audited, the publisher numbers should be used. This second point is left to final negotiation between the publisher and the agency/advertiser. Larger advertisers will likely have more negotiating power than smaller ones.

The guidelines seek to lower systemic discrepancies below 10 percent. They try to explicitly determine whose number is used for billing, and under what circumstances an investigation is warranted. If everyone goes through the excruciating auditing process the MRC is putting together, you can trust all the numbers will be as good as we (as an industry) can get them.

Complying with these standards will be expensive. Big publishers won’t have a problem as they’re already audited. This is just another layer of refinement to existing audits. But for startups that aren’t currently audited, this will be costly.

The good news is the auditing guidelines are comprehensive. When the publisher and third party follow the proper methodologies and are audited, discrepancies should be minimal. The real issue is the exceptions: the outliers.

If the guidelines are followed, impression discrepancies between the publisher and third-party server should be under 10 percent. If an outlier event occurs (discrepancies higher than 10 percent), both parties should investigate. It’s highly unlikely either party would refuse, as this is a standard practice.

The outcome should be that contracts between publishers and advertisers define which set of numbers to use for billing, so discrepancies don’t hold up billing. But extreme cases, say higher than 25 percent, may require discussion. I’ve heard of discrepancies as high as 60 percent, which obviously should be considered extreme.

Perhaps the contractual language should require that in the case of discrepancies greater than 20 percent, the higher set of numbers will be used. Typically, true discrepancies lead to the “wrong” party having lower numbers, so this would be safe. I’ve seen legitimate cases as high as 20 percent, caused simply by Internet latency.
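The threshold logic described above can be sketched in a few lines. This is purely illustrative: the function names are hypothetical, the 10 percent tolerance comes from the guidelines as discussed here, and the over-20-percent "use the higher set" rule is my proposed contractual clause, not anything the guidelines mandate.

```python
def discrepancy_pct(publisher_count: int, third_party_count: int) -> float:
    """Relative discrepancy between two impression counts,
    expressed as a percentage of the larger count."""
    larger = max(publisher_count, third_party_count)
    return abs(publisher_count - third_party_count) / larger * 100


def billing_action(publisher_count: int, third_party_count: int) -> str:
    """Map a discrepancy to the responses discussed in this article."""
    d = discrepancy_pct(publisher_count, third_party_count)
    if d <= 10:
        # Within the systemic tolerance the guidelines aim for.
        return "within tolerance: bill on the contractually agreed numbers"
    if d <= 20:
        # An outlier event: both parties should investigate.
        return "outlier: both parties should investigate"
    # The proposed contractual clause: above 20 percent, use the higher set.
    return "extreme: use the higher set of numbers"
```

For example, a publisher count of 1,000,000 against a third-party count of 950,000 is a 5 percent discrepancy and falls within tolerance, while 700,000 on the third-party side would be a 30 percent discrepancy and trigger the extreme-case clause.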

There are systemic problems and exceptions. The guidelines are designed to reduce these. You should see average discrepancies well under 10 percent once all parties comply. If there are specific problems with implementation or if someone has a network problem during a campaign (this happens across the board more often than people realize), those issues should be relatively easy for most publishers and third parties to sort out between themselves.
