First there was the click-through. Advertisers gauged the success or failure of their online efforts with a metric that represents the percentage of an exposed audience touched enough by a particular ad to impulsively respond to it.
Then we started moving to conversion rates (which are still something of a mystery to more advertisers than I care to mention). This measure at least tells you not only whether your advertisement was attractive but also whether the value proposition made on behalf of your particular product or service was fetching enough, to enough of the right people, to elicit engagement on a level beyond impulse: registering for a newsletter, requesting more information, or actually buying the widgets you have up for sale.
Now we talk about branding effects, purchase intent, consumer attitude, awareness, buying power, and myriad other expressions of ad effectiveness. All of these measures are used to give us an idea of which advertising is working and which isn’t. The reason we want to know this, of course, is so that we can do more of the kind that works and less of the kind that doesn’t.
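To make the first two measures concrete, here is a minimal sketch of the arithmetic behind them. All figures and function names are hypothetical, chosen only for illustration.

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Percentage of ad exposures that produced a click."""
    return 100.0 * clicks / impressions


def conversion_rate(conversions: int, clicks: int) -> float:
    """Percentage of clickers who went on to act: registering,
    requesting information, or actually buying."""
    return 100.0 * conversions / clicks


# Hypothetical campaign: 500,000 impressions, 2,500 clicks, 75 sign-ups.
ctr = click_through_rate(2_500, 500_000)  # 0.5%
cvr = conversion_rate(75, 2_500)          # 3.0%
print(f"CTR: {ctr:.2f}%  conversion rate: {cvr:.2f}%")
```

The point of splitting the two is exactly the one made above: a high CTR with a low conversion rate suggests an attractive ad attached to an unconvincing value proposition, and vice versa.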
But when all is said and done, there is still only one way to determine which among the tactics you have deployed actually has any effect (based on whatever metric for success satisfies the client’s objectives). There is only one way to do this in the online space, or in any media for that matter.
That “categorical imperative” (forgive me, Immanuel) of online advertising is testing.
When pressed, nine times out of ten clients will come to realize that the buy you put together for them needs to satisfy a direct-response strategy. Why? Because their success (read: your success) will be judged by a cost-per-action metric. This is not to say that clients are looking only at how many widgets were moved over the Web as a direct result of the advertising placed there; as more sophisticated advertisers with different kinds of sales cycles come to the Web, direct-to-consumer (DTC) sales attribution will not matter in quite the same way it has, and still does, for e-commerce entities. Even so, clients are going to want to know what their “cost-per-whatever” is as a function of their advertising spend.
Yet time and again, sites advance client proposals that seek significant monthly spending commitments well beyond what reasonable testing would seem to require. Many of these sites are unproven, and, in this day and age of accountability, advertisers expect to learn from their advertising in ways they’d only dreamed of before. What messages resonate with audiences? Which turn them off? What combination of placement and ad unit yields the most efficient cost per action? Which sites drive customers with the highest lifetime value? The list of things advertisers want to, and can, learn about their advertising and about those to whom they advertise is a long one.
Ad vendors have hawked their wares for years using the claim that their audiences are the best suited for an advertiser. In traditional media, when asked to prove this claim, planners and buyers were inundated with reams of research and subscriber studies to demonstrate the intrinsic value of a vehicle. Surprisingly, this was enough. But that isn’t good enough for a medium as accountable as the Web (and it won’t be good enough for traditional media, either, in the future). Advertisers need to be able to test the vehicle before buying into it big. They need to be allowed to minimize their financial exposure at the outset and learn whether a given vehicle works.
So when putting together a buy, pick the sites that have the audience you are looking for and that come at a low cost per thousand impressions (CPM). I understand cost isn’t always the deciding factor for or against a site, but you really want to minimize your exposure the first time out of the gate. If the sites perform well, bully for you! If not, at least it didn’t cost you a small fortune in branding-opportunity sponsorships to find out.
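The cost arithmetic behind that low-CPM test buy can be sketched as follows. The sites, CPMs, and action counts are invented for the example; the only fixed convention is that CPM is the price per thousand impressions.

```python
def media_cost(impressions: int, cpm: float) -> float:
    """Total spend for a placement; CPM is the price per 1,000 impressions."""
    return impressions / 1000 * cpm


def cost_per_action(spend: float, actions: int) -> float:
    """The 'cost-per-whatever' clients will ask about."""
    return spend / actions


# Test two hypothetical sites at 100,000 impressions each.
spend_a = media_cost(100_000, cpm=5.00)   # $500 at a $5 CPM
spend_b = media_cost(100_000, cpm=20.00)  # $2,000 at a $20 CPM

# Suppose site A delivered 40 actions and site B delivered 50.
print(f"Site A CPA: ${cost_per_action(spend_a, 40):.2f}")  # $12.50
print(f"Site B CPA: ${cost_per_action(spend_b, 50):.2f}")  # $40.00
```

Run at a modest impression level, a comparison like this tells you which vehicle earns the bigger commitment next time, and it does so without the small fortune.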
This approach is in everyone’s best interests:
- It’s good for advertisers because it retains their enthusiasm for the media at large due to its flexibility and utility. Making expenditures with some smarts doesn’t look too bad to a board or on a balance sheet, either.
- Sites win because if they indeed turn out to be for an advertiser what they said they’d be, the advertiser will be back next time — willing to spend more and for a longer period of time.
- Agencies win because they can aggregate more learning from a wider array of vehicles and advertisers and they can provide better service to the rest of their existing and future clients.
And all of this together will buttress the entire industry, coaxing the reluctant wallflowers out onto the dance floor, and ultimately generating more business for everyone.