Although both have their fair share of great minds, there are times when interactive advertising agencies and the publishers they work with simply don’t think alike. Unfortunately, this is most often the case when measuring the effectiveness of their clients’ ad campaigns is at stake.
Advertisers would surely like to think we’re all on the same page when it comes to gauging a campaign’s success, and given how much a harmonious opinion on this can influence — from campaign optimization to future media buys — we really should all have our clients’ best interests in mind. Yet we each bring our own perspective, and that can poke a hole in even the most airtight of media plans.
We see some evidence of this divide in the way we approach audience data. Publishers offer us their internal data, and we get a second opinion from our own third-party partners like comScore and Nielsen//NetRatings. When the data doesn’t match up, it can make for some tricky planning and pretty volatile agency and publisher negotiations.
More problematic than that, however, is the way agencies choose to rate the sites they work with, and the data resulting from the campaigns that they run with those publishers. Some agencies use an efficiency ranking system to determine the potential effectiveness of a media environment, and apply that with blanket coverage to all of their partner sites.
The issue lies in the fact that not all sites are created equal. Let’s use an in-market automotive research site as an example. Such sites are designed with one objective in mind: to sell cars. Pages are laid out in a fashion that incites users to gather more information. Content is presented in a way that keeps them on the site, delving deeper into the vehicles that most interest them.
The objective, from the publisher’s perspective, is to encourage consumers to push themselves further down the purchase funnel. Naturally, the automotive advertiser is all for this plan. Theoretically, its ad agency should be as well.
To apply a typical efficiency score to these sites is to interfere with what would otherwise be great synergy. Let’s assume the agency’s internally developed rank incorporates such factors as number of clicks and post-click actions.
Another advertiser might benefit from this system, as it would let its media buyers know which sites are most likely to meet its needs of generating visits to its brand site, or increasing online sales. For that automotive advertiser, however, it would prove inappropriate and hazardous when used to analyze the value of a potential site buy.
Just as clicks aren’t the primary concern for the auto research site publisher, they aren’t the goal for the advertiser, either. Can an agency that has implemented a system extending automatically to all of its media partners really be expected to assess how that system will affect each of thousands of individual sites?
The same applies to any in-market site, regardless of the industry. Agencies, and their clients, are focused on the arbitrary act of getting clicks, whereas the publisher is focused on selling products. These two objectives are diametrically opposed; to make matters worse, these scoring systems are rarely set up to capture the offline information that’s so critical to the client.
Publishers that are on top of their game are well aware of such efficiency scores, and proactively point to their flaws before their sites are erroneously cut from a media plan. We can’t rely entirely on their attentiveness, however, when we’re the first stop for campaign accountability with our clients. If our clients aren’t actively demanding different efficiency scoring systems (and different ad creative, for that matter) for in-market sites versus lifestyle sites, ad network buys, and so on, it’s up to us to make this distinction.
As in many aspects of media planning and buying, it all comes down to the pros and cons of automation.
Certainly, planning software, audience measurement tools, and data services have greatly improved our ability to develop campaigns that, on paper, appear virtually flawless. Site user profiles are perfectly matched to advertiser targets; ad placements are paired with units that fit each advertiser’s objective. When it works, it works wonderfully well, but it would be reckless to remove the human element from the process and turn our work entirely over to a machine — particularly when these tools lead us to think in terms of online actions instead of tangible sales.
There needn’t be such discord between agencies and publishers if media buyers would dedicate more time to each individual campaign, and to uncovering just these sorts of discrepancies. As our options become more technically advanced, we might consider doing something completely ironic and going back to basics.