It’s surprising how often media teams feel forced to “wing it” when putting together plans to spend millions of dollars on online advertising. Clients are in a rush to get campaigns live and start seeing results, so media teams often have to pull the trigger before they’re satisfied with the time and energy spent planning and experimenting, compromising and staying pragmatic in the face of limited time and options. Here are two approaches to consider:
Test and Control Groups: Your Misunderstood Friends
Agencies commonly run multiple ad networks and publishers on a campaign and compare results. This feels virtuous, experimental, and scientific, but it’s often hard to draw clean conclusions. Usually, the campaign includes retargeting partners, behavioral partners, maybe some social or contextual partners, and some major portals. All of these buys are live concurrently and on the same population, so it’s hard to tell which partner actually drove which results. Last-click or last-view attribution is better than nothing, but obviously worse than running a clean experiment. You can always wonder, for example, “what if we hadn’t been running the portal buy — would it have made our retargeting partners look worse, because it would have reduced the population available for retargeting?”
The odd thing is that it’s not challenging to segment an online audience. For example, if you want to test two retargeting partners, you can randomly cookie users with a test cell label A or B. You can set up your Web site to include ad network A’s retargeting pixel on test cell A and ad network B’s pixel on test cell B. This gives you a chance to see what it would be like if you switched all of your retargeting to just partner A or just partner B, without having them compete against each other for the same consumers on exchanges.
All sorts of good comes from test and control groups. For example, one advertising client of my company was very skeptical about view-through conversions from retargeting. Surely, these consumers would have converted anyway, they thought. So, we simply did a clean A/B test where group A (the “test” group) saw the company’s ad, group B (the “control” group) saw a random ad, and we measured the resulting conversion rates. If you’re reading this column I assume you already believe in the effectiveness of display advertising, but the customer was amazed at the lift. Since it was a clean experiment, there was no other hypothesis that could explain the difference in conversion rates, and we all agreed that the only possible conclusion was that the display campaign had been effective.
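The lift measurement in the experiment above boils down to comparing two conversion rates. A minimal sketch, using made-up counts and a standard two-proportion z-test (not the client’s actual numbers or our measurement stack):

```python
from math import erf, sqrt

def conversion_lift(test_conv: int, test_n: int,
                    ctrl_conv: int, ctrl_n: int):
    """Compare conversion rates of the test group (saw the brand's ad)
    against the control group (saw a random ad).

    Returns (relative_lift, p_value) where p_value comes from a
    two-sided two-proportion z-test.
    """
    p_t = test_conv / test_n
    p_c = ctrl_conv / ctrl_n
    lift = (p_t - p_c) / p_c
    # Pooled standard error under the null hypothesis of equal rates.
    p = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = sqrt(p * (1 - p) * (1 / test_n + 1 / ctrl_n))
    z = (p_t - p_c) / se
    # Two-sided p-value via the normal CDF, Phi(z) = (1 + erf(z/√2)) / 2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Illustrative numbers: 3% vs. 2% conversion on 10,000 users per cell.
lift, p_value = conversion_lift(300, 10000, 200, 10000)
```

With the randomized split guaranteeing comparable populations, a small p-value leaves the campaign itself as the only plausible explanation for the difference — which is exactly why the clean experiment was so persuasive.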
Test and control experiments are an amazingly simple and under-utilized tool to facilitate fact-based decision making. It’s often a fairly small step from “quick and dirty” experiments to “quick and clean” experiments that yield much more informative results.
Measure Your Objectives and Hold Advertisers Accountable
In the previous generation of online advertising, it was safe to just buy inventory in bulk from portals and assume one impression had about the same characteristics as another, on average. Today’s ad networks are more of a precision buy, so if you tell your ad network partners that you have an objective (say, a CTR (click-through rate), CPA (cost per action), or brand metric) they will strive to hit it. In this new world of precision buys on exchanges, it’s not safe to assume that multiple metrics will correlate the way they used to when you did broad portal buys.
For example, say you’re running a campaign for a CPG company whose goal is to get consumers to download a coupon. It’s common that the client and their tech partners will either view it as an imposition to place pixels on the coupon download page, or they’ll agree to do it but only get it done a week before the campaign is finished. In this kind of scenario it can be tempting to just fall back to some proxy metric like CTR and assume this campaign’s click-to-action rate will be the same as last year’s. Don’t do it! There’s a huge difference between optimizing to clicks versus optimizing to actions. If you’re working with partners and you tell them you want clicks, you’ll get clicks, but in their attempt to optimize the click metric it’s likely that other metrics will suffer. In cases like this it’s important to be persistent and persuasive with the advertiser about the need to measure results. If it’s truly impossible to measure the desired actions, you need to be sure the client agrees to the proxy/substitute metric and understands that performance on their real metric may vary.
The mantra from agencies to their clients should be “if we can measure it, we can optimize it,” and the corollary is “if we can’t measure it, we can’t reliably optimize it.” Advertisers need to know that the difference between a successful million-dollar online campaign and a failed million-dollar online campaign can be decided by whether or not their information technology or Web teams acted on requests to enable measurement of campaign effectiveness, which can be as simple as putting a pixel on a page.
And a Final Note
Online display advertising is a world fraught with opportunity and peril, and I hope I’ve been occasionally useful in pointing the way towards the former and away from the latter. This is my last ClickZ column, but I hope I can continue to be useful to you. Feel free to drop me a line.