Many media buyers consider their work to be done once they’ve trafficked the last creative to the last site. But actually, at this point, the better part of the work is just beginning.
The degree to which we closely manage a campaign — by switching among different creatives and shifting media weight in accordance with real performance — determines the ultimate effectiveness of the dollars spent.
No matter how good our instincts are, a portion of the sites we choose and several creative executions will turn out to be turkeys. Likewise, several choices will turn out to have perfect synergies that lead to performance many times greater than the average. It happens with every campaign, and it’s very difficult to predict.
Agencies and clients that treat online campaigns like print campaigns — sending them out and waiting for results at the very end — are doomed to suffer from random inefficiencies. Those folks who are willing to put in place the staff necessary to monitor the ongoing information stream, however, stand to win big.
Buyers shouldn’t be fooled into thinking that this is an easy process. It’s not. The tasks involved are tedious and often thankless, and they can seem endless. But smart clients will pay an agency to ensure these tasks get done because the media savings far outweigh the staffing costs.
You Need More Than Click Data
The first thing to understand is that clicks, most often, are not a good indicator of performance. Sometimes click-through data is the only information we have aside from impression levels. Other times, like when our campaign objective is merely to drive site traffic, it is a good measure of performance. But many studies have shown that with sales and branding objectives, click-through is not a relevant measure of actual success.
For example, say your agency put together a three-site, four-week campaign that began running two weeks before Valentine’s Day for HotDates.com, a computer dating service. The first two weeks’ worth of click-through data shows that some sites are doing better than others and that one creative seems to be outperforming the other.
To simplify matters, we’ll look at just the relative performance of the two creatives within one site. The click data initially shows the “Cheap Service” creative outdoing the “Find Love” creative by about 33 percent: 20 clicks to Find Love’s 15 in each of the media placements.
With only this Tier I data available, that comparison is about as sophisticated as the optimization process can get. To truly determine creative and media performance, we must turn to the Tier II and Tier III data.
Looking at subsequent transactions, we see that the numbers reverse themselves. The Cheap Service creative does pull in more people, but those who come from the Find Love creative prove to be more qualified. When we take into account the number of people who participated in the trial period, the creative previously thought to be less effective actually outperforms the other by a ratio of 2-to-1.
Typically, this effect gets further exaggerated as we look at actual purchases after a trial period. In this example, the Find Love creative earns three purchasers for every one that Cheap Service earns.
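The reversal above can be tallied in a few lines. The click counts come from the example; the trial and purchase counts are illustrative numbers I’ve chosen to match the stated 2-to-1 and 3-to-1 ratios, not figures from any real campaign.

```python
# Hypothetical per-creative results for the HotDates.com example.
# Clicks are from the article; trials and purchases are illustrative
# values consistent with its 2-to-1 and 3-to-1 ratios.
creatives = {
    "Cheap Service": {"clicks": 20, "trials": 5, "purchases": 1},
    "Find Love":     {"clicks": 15, "trials": 10, "purchases": 3},
}

for name, d in creatives.items():
    trial_rate = d["trials"] / d["clicks"]  # how qualified the clickers were
    print(f"{name}: {d['clicks']} clicks, "
          f"{d['trials']} trials ({trial_rate:.0%} of clicks), "
          f"{d['purchases']} purchases")

# By clicks alone, Cheap Service leads 20 to 15 (about 33 percent).
# By trials, Find Love leads 2-to-1; by purchases, 3-to-1.
```

The point of the exercise: the winner flips depending on which tier of data you rank by, which is why optimizing on Tier I clicks alone can steer the budget the wrong way.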
Responsible media agencies will ensure that the campaigns respond appropriately to these ongoing analyses. This requires some foresight. When the contract is drawn up between buyer and seller — typically in the form of the insertion order — clauses must be inserted to ensure that advertisers maintain their right to pull out of underperforming campaigns. Frequently, cash penalties and other terms will apply to these media retreats.
Even if the agency wishes to switch the creative or change the balance of media weight assigned to individual pieces of creative, this needs to be spelled out beforehand in the contract. Sites generally have a policy of forbidding frequent changes unless otherwise negotiated. Typical boilerplate contracts allow for one or two creative switches in the course of a monthlong campaign.
How Much Scrutiny Should Apply?
Only a minority of agencies currently apply this level of scrutiny to their clients’ media buys. Some ignore the potential benefits, but most either don’t have the necessary staffing resources or have difficulty getting their online advertising clients to pay for the additional staff.
An important thing to remember is that if a client cannot provide at least Tier II data, then it’s generally not worth conducting the optimization process.
Obviously, the figures in our examples have been simplified. A real-world situation will involve additional factors, like differing media prices, overdelivering and underdelivering sites, and nonparallel creative tests across different sites. Each will require a slightly different reaction on the part of the analyst.
When to Go With Your Gut
Many slight data imperfections will prevent us from making hard claims about trends we think we see in the numbers. But sometimes we can be a little too cautious when it comes to interpreting data.
In fact, when we know that one site has performed consistently with a similar type of audience in the past, we can frequently draw conclusions about the performance of a piece of creative across both sites. We do have to be careful. I know that I’ve been proven wrong about as many times as I’ve been proven right on these types of hunches. Some, though, can be blindingly obvious.
If Sites Don’t Sell by Click
Over the course of several campaigns or, more often, the revision of a single campaign over time, agencies come to understand both the click-through rates of various media and the subsequent value of those clicks to the client. Sites — whether they sell by cost per thousand impressions (CPM) or cost per click (CPC) — will find themselves dumped off the buys based on their CPC and cost-per-action (CPA) performance. Everything might get translated back into CPMs just to make the site’s sales rep’s manager happy, but it’s all the same in the end.
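Translating everything back into CPMs is simple arithmetic once you know the response rates. A minimal sketch, with entirely hypothetical prices and rates:

```python
# Effective CPM: what a CPC or CPA deal costs per thousand impressions,
# given the rates observed on the campaign. All figures are hypothetical.

def ecpm_from_cpc(cpc: float, click_rate: float) -> float:
    """Effective CPM for a CPC deal (click_rate = clicks per impression)."""
    return cpc * click_rate * 1000

def ecpm_from_cpa(cpa: float, action_rate: float) -> float:
    """Effective CPM for a CPA deal (action_rate = actions per impression)."""
    return cpa * action_rate * 1000

# A $0.50 CPC deal at a 0.8 percent click rate prices out like a $4 CPM.
print(round(ecpm_from_cpc(0.50, 0.008), 2))
```

The same arithmetic runs in reverse: divide a quoted CPM by the observed click or action rate (times 1,000) to see what the site is really charging per click or per action.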
I’ve found that using this argument on reps makes them much more amenable to allowing CPC and CPA deals. They know that if they become more attuned to the real metrics I’m using to evaluate sites, they stand a better chance of finagling the right placements for me at their site. If they brush me off with a load of impressions, I’m not likely to come back for more.
Buyers need to be careful about CPC deals, however, since the click proves to be such a poor metric of campaign performance. Clicks are useful to buyers only when qualified users sit behind them. CPC deals must be constrained by targeting and creative messaging to ensure some measure of audience quality. CPA deals do not suffer from this liability, since the actions themselves tend to reflect audience qualification.
Next week, we will finish off the performance data series with the fifth part of the trilogy. (My ClickZ editors asked if we should call it a “quintology,” and I responded that perhaps we could call it two-thirds of a “nonology.”) We’ll look at long-term analysis and data-ownership issues.