If analysts told clients what they need to know (instead of what they want to hear), it would sound something like this...
Whether you're the marketer, publisher, or agency, you're likely still playing the online advertising "average metrics" game: What are our numbers, and what's the average?
As a researcher, much of my time is spent answering questions of averages: average click rate, average open rate, and so on. My business is client service, so I have to give customers what they want. As such, I usually provide clients with whatever average they seek.
For a change, I'd like to use this occasion not to provide what many want but, rather, what they need. Here for your future reference is a grizzled analyst's treatise on the maddening misuse of middling metrics:
If you want to be average, you already are. I don't want to sound too cynical, but the fact that so many ask for average numbers and so few ask for best-case numbers is more than a little revealing. It's OK to benchmark your work against an average, but it's not OK to do so exclusively. Find out what the best in your class do. Worry more about that than what the worker bees are up to.
A fib of omission. You performed way above average, all right. How much did you pay for that performance? If you got twice the average performance rate at four times the average cost, you failed. If you must ask for an average, don't settle for just one! Make sure you get the average cost per metric for the given performance in your business category. There's a big difference between effectiveness and efficiency. That difference is often exactly equal to the difference between success and failure.
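The "twice the rate at four times the cost" arithmetic above is worth seeing on paper. A minimal sketch, using hypothetical numbers (a 1.0% average click rate at a $10 CPM) purely for illustration:

```python
# Hypothetical category benchmarks -- illustrative only, not real figures.
avg_click_rate = 0.010   # category-average click-through rate (1.0%)
avg_cpm = 10.00          # category-average cost per thousand impressions

# Your campaign: twice the average performance, at four times the average cost.
your_click_rate = avg_click_rate * 2   # 2.0%
your_cpm = avg_cpm * 4                 # $40 CPM

# Cost per click = cost of 1,000 impressions / clicks per 1,000 impressions.
avg_cost_per_click = avg_cpm / (avg_click_rate * 1000)     # $1.00
your_cost_per_click = your_cpm / (your_click_rate * 1000)  # $2.00

print(f"Average cost per click: ${avg_cost_per_click:.2f}")
print(f"Your cost per click:    ${your_cost_per_click:.2f}")
# Twice the effectiveness, half the efficiency: every click costs you double.
```

The effectiveness headline (2x the click rate) hides the efficiency story (2x the cost per click), which is exactly the fib of omission.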
The value of nothing. Knowing what a click, registration, rating point, percentage point in lift, or sale costs is mission-critical. So's knowing what these are worth. This may seem remedial, but a shockingly small minority of online marketers still distinguish between online acquisition and retention budgets. You may hit your cost per customer overall, but how do the numbers look when you separate repeat customers from new ones? You can't calculate an average fairly if you pad the numbers with repeat buyers, especially if your eye should be on a true cost per new customer.
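Here's how that padding works in practice. A quick sketch with made-up campaign figures (the spend and customer counts are hypothetical, chosen only to make the gap obvious):

```python
# Hypothetical campaign results -- illustrative numbers only.
spend = 50_000.00
total_customers = 1_000   # every conversion attributed to the campaign
repeat_customers = 600    # buyers you already had before the campaign
new_customers = total_customers - repeat_customers  # 400

# The blended average looks healthy because repeat buyers pad the denominator.
blended_cpa = spend / total_customers   # $50 per "customer"

# Strip out the repeat buyers and the true acquisition cost emerges.
true_new_cpa = spend / new_customers    # $125 per new customer

print(f"Blended cost per customer:  ${blended_cpa:.2f}")
print(f"True cost per new customer: ${true_new_cpa:.2f}")
```

Same spend, same campaign: a $50 average that is really a $125 acquisition cost once the retention traffic is pulled out of the denominator.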
Don't ask. Don't tell. Don't kid yourself. People are very selective when they're data consumers. I find the marketers who question how my averages are calculated are the ones whose numbers come in below those averages. Yes, preparing marketing data is a lot like making sausage. In the case of marketing data, everyone needs to know what he's swallowing. How media is paid for, how creative is delivered over time, weekdays on which a campaign runs, multichannel lift, and scale are all significant factors in how campaigns perform and how performance averages are quantified. Stiffen your spine and hold your nose if you have to, but always make sure you know what's in that data.
Underachievement is in the eye of the beholder. Balancing effectiveness and efficiency is a basic component of campaign optimization. Once you understand the dynamic that creates an optimal balance, you must create performance at scale. You should (with few exceptions) be willing to trade above-average performance for scale. If optimization efforts have taught you where the above-average performers are, spend what it takes to get a higher volume of desired results, even if that means surrendering overall efficiency and effectiveness. It's usually better to be average in bulk than super-effective/-efficient and puny.
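The "average in bulk" point reduces to simple arithmetic once you put a value on each result. A sketch with hypothetical placements and a made-up $60 customer value, just to show the shape of the trade-off:

```python
# Hypothetical figures -- illustrative only.
customer_value = 60.00   # assumed worth of each new customer to the business

# A niche placement: super-efficient cost per customer, but tiny volume.
niche_cost, niche_customers = 2_000.00, 50       # $40 per customer
# A broad placement: merely average cost per customer, but real scale.
broad_cost, broad_customers = 50_000.00, 1_000   # $50 per customer

niche_profit = niche_customers * customer_value - niche_cost   # $1,000
broad_profit = broad_customers * customer_value - broad_cost   # $10,000

print(f"Niche placement profit: ${niche_profit:,.2f}")
print(f"Broad placement profit: ${broad_profit:,.2f}")
# The broad buy is 25% less efficient per customer, yet earns 10x the profit.
```

The niche buy wins every efficiency benchmark and still leaves most of the money on the table, which is the author's point: being average at scale can beat being excellent in miniature.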
The fastest horse and buggy. From time to time, we'd be well served to remember we're talking about a marketing medium that's not yet 10 years old. If even 25 percent of Internet hype should come to pass, history will view what we're doing now as the marketing equivalent of Atari Pong. It's fine to benchmark with averages how we do relative to others, but there are far more rules yet to be written than ones already etched in stone. Most of us would do better to create new performance paradigms with the same energy we currently devote to keeping up with current models.
Rudy Grahn
As a senior analyst with Jupiter Research, Rudy scrutinizes online advertising, marketing, and commerce, including campaign measurement and analysis, interactive branding, Internet classifieds, and online performance marketing. Earlier, Rudy oversaw analysis and optimization at SFInteractive for clients including Microsoft bCentral, Adaptec, Verisign, Morgan Stanley Dean Witter Online, Red Envelope, and Women.com. He was the founding creative and senior copywriter for i-traffic, where he worked on campaigns for CDNow, Disney, Eddie Bauer, BellSouth, Doubleday Interactive, and CNN/SI, among others. Rudy's observations are solicited by The Wall Street Journal, Newsweek, ABC News, The New York Times, Reuters, C|Net Radio, Business 2.0, Wired, The Washington Post, and American Demographics, to name a few.