How I Use ACE to Be the Geraldo Rivera of PPC

Put conspiracy theories about AdWords Campaign Experiments to bed with these three cases.

Investigative journalism: some view it as merely casual entertainment; to others, it represents the beating heart of truth, justice, and freedom.

From Geraldo Rivera to Dateline NBC to Malcolm Gladwell, there’s nothing quite like shining the cold, bright light of reality onto an ugly situation or an elusive truth.

Why am I driven to be more like Geraldo? Maybe it’s Movember. Maybe it’s that we seem to have a lot in common. According to Twitter, we both follow Jerry Seinfeld, Barack Obama, and Craig Ferguson.

If you’re into keyword advertising, welcome to your new means of blowing the lid off any murky claim, catching the ROI thieves red-handed, or even uncovering a nasty conspiracy to befuddle you: it’s AdWords Campaign Experiments (ACE).

ACE has been out for a while now. It offers a means of forcing a split stream of traffic on dimensions you previously couldn’t test directly. There are quite a few amazing uses for ACE. Probably the best part is, Google doesn’t coach you on how to use it. You’re free to come up with surprising uses of your own.

If you’re hearing a constant din of conspiracy theories and hearsay flying around elements of your marketing, what better way to put them to bed than to simply use the scientific method with a control group, an experiment group, and a series of target metrics to compare?
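And once the two streams come back, it takes only a few lines of code to tell signal from noise. Here’s a minimal sketch of a two-proportion z-test on control vs. experiment conversion rates; the click and conversion counts below are hypothetical, purely for illustration:

```python
from math import sqrt, erfc

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    p = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p * (1 - p) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Hypothetical counts for the control and experiment streams of an ACE split.
z = two_proportion_z(conv_a=130, clicks_a=4000, conv_b=100, clicks_b=4000)
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a real difference
```

Nothing fancy, but it keeps the hearsay honest.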

Here are three experiments I’ve been playing around with recently.

Case No. 1: Lured by an ROI Predator – The High-Bid Fail

Beware the siren song of the Bid Simulator. If you look at Google’s graphical representation of the potential increase in premium position impressions you can generate from a keyword by bidding much higher on it, you might easily be tempted to go after more volume, assuming the conversion rate will stay relatively constant and you’ll have a solid shot at an increase in total profit.

The temptation only increases as the HiPPO (highest paid person’s opinion) does frequent searches on core keyword terms, and sees herself sitting in second and third place behind an obviously “inferior” competitor. So eventually you’re lured into bidding more.

Prior to ACE, we’d feel our way around with tests in serial order, since you couldn’t easily A/B test a bid level on a single keyword (or group of keywords) and handily analyze the results.

In this case, an art company wanted to be in premium position more often; but would it be a profitable gambit?

We ran half the traffic on a set of closely related keywords at a much higher bid. The ACE report in the screenshot below shows a terrible result: although the actual average CPC rose only 20.3 percent, the change in ad positions led to a much lower conversion rate (from 2.24 percent to 1.40 percent), much more traffic, and a much worse return on ad spend. This ego bid seems likely to benefit only Google.

[Screenshot: ACE report comparing the control and high-bid experiment streams]

We learned that lower-budget customers who see premium-positioned ads expect discounts on average-quality products; when they find a high-quality offering instead, they don’t buy.

We now know the exact result of splitting the traffic between high and low bids on this set of keywords.
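For the arithmetic-minded, the failure mode is easy to reproduce. Here’s a quick sketch of the ROAS math using the percentages above; the baseline CPC and the average order value are hypothetical stand-ins, since only the rates were reported:

```python
AOV = 120.00  # assumed average order value, dollars (hypothetical)

def roas(cpc, conv_rate, aov=AOV):
    # Revenue per click divided by cost per click. Note that traffic volume
    # cancels out: extra clicks at these unit economics only scale the loss.
    return (conv_rate * aov) / cpc

print(f"control ROAS:    {roas(cpc=0.59, conv_rate=0.0224):.2f}")          # ~4.56
print(f"experiment ROAS: {roas(cpc=0.59 * 1.203, conv_rate=0.0140):.2f}")  # ~2.37
```

The extra traffic the higher bid buys doesn’t rescue it; it just multiplies the weaker unit economics.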

Case No. 2: Fear of Heights

Will a one-word broad match keyword help your business soar, or will it throw you off a cliff?

In this case, the ad group for a popular powdered herb with claimed health benefits lacked the one-word broad match. That was by design: an extensively built-out ad group should provide nearly 100 percent of your potential reach without engaging in the kind of muddy targeting that one-word broad matches can produce.

Still, we wanted to add that broad match term to see what would happen. What retailer can resist the promise of additional reach and more growth?

Problem: simply layering another keyword variant over existing keyword patterns tells you little about the incremental value the broad match keyword brings. Broad match forms have a nasty habit of claiming all kinds of credit for sales they are simply cannibalizing away from other keywords in the group.

Solution, using ACE: add this single term into the ad group as an “experiment only” keyword. Then, after a time, view the statistics for the overall ad group. The stats for the control version of the ad group should portray the workings of that ad group as if that keyword were never added. The other will show you what happens in the aggregate when it’s added. Comparing the two streams means comparing group performance as a whole using two versions of an ad group, as opposed to erroneously crediting a new keyword with success that came only from cannibalization.
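Here’s a toy illustration of the accounting problem this solves, with made-up conversion counts for a hypothetical ad group (the keyword names are placeholders):

```python
# Conversions per keyword in each version of the ad group (hypothetical).
control = {'"blue widgets"': 40, '[blue widgets]': 55, '+blue +widgets': 25}
experiment = {'"blue widgets"': 28, '[blue widgets]': 44, '+blue +widgets': 18,
              'widgets': 31}  # the new broad match "claims" 31 conversions

claimed = experiment['widgets']
incremental = sum(experiment.values()) - sum(control.values())
print(f"conversions the broad match claims: {claimed}")                # 31
print(f"true incremental conversions:       {incremental}")            # 1
print(f"cannibalized from sibling keywords: {claimed - incremental}")  # 30
```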

We added that keyword cautiously, at a modest bid. It still managed to claim credit for numerous conversions. Yet the two versions of the ad group as a whole were virtually tied on total conversions, and the experiment version had a worse CPA. That’s in line with what we had predicted.

[Screenshot: ACE report comparing the ad group with and without the broad match keyword]

The main thing we illustrated was that crediting broad match forms of keywords with “independently” converting to sales just leads to confusion. Ideally, those forms of your keywords would get relatively little credit in the overall scheme of things as you build out and track more specific phrases and exact matches.

We’ll be continuing to run such tests using different techniques and bid levels.

We could simply dismiss the power of the broad match form based on this first test, but we’re not satisfied merely with pretty-looking CPAs. The quest for volume warrants further experimentation.

Case No. 3: The Offer They Couldn’t Refuse

Free shipping offers cost money, so it’s important to see a clear lift from them. On a printing company’s highest-margin product, we wanted to understand whether a free shipping offer helped or hurt financial results when all was said and done. In my experience, companies often don’t test these theories rigorously enough. And they shouldn’t have to drop all ongoing testing to gain an understanding of this single testing dimension.

Enter ACE: it makes experiment interpretation much easier. In our test, for one ad group, we simply tagged all the non-shipping-offer ads as “control only” and all the shipping-offer ads as “experiment only.” One nice feature of ACE: you can force ad serving to behave in a way that’s conducive to a specific type of test you’re running. No more wondering about broken ad rotation that “forgets” to rotate some of your ads “more evenly.”

Without ACE, we might have talked ourselves in circles on this one.

The glib view of this, and your likely starting point, may well be that shipping offers and time-limited promotions always work, that they’re a must, etc.

Stage two is the second-guessing stage. When you start experimenting with offers, your head starts playing tricks on you. You start cheering for your legacy ads; after all, they were well-crafted, and they don’t cost you a dime in free shipping. If you’re using time-limited offers, buyers often wait a few days to pull the trigger, so your offer-based ads are “losing” at first and seem to magically pull into the lead a few days later, often when you’re busy with something else. Without utterly clear reporting, it’s common to give up on these ideas too easily.

By forcing a definitive test, we were able to turn our subjective view of a losing strategy into clear proof of a winning strategy. The cost of the shipping still needs to be factored in, but for a one-month test, our metrics are pretty clear.

For the control (non-offer) group, the CPA was $98.03 vs. $69.49 for the experiment (offer) group. The conversion rate for the non-offer group was 2.15 percent vs. 3.08 percent for the offer group. And even the return on ad spend (ROAS) was clearly better for the offer group, at 2.77 vs. 2.37. This last point was particularly important; in some tests, we find that the offer-induced clicks attract only small purchasers. Not the case here.
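Folding the shipping cost back in is simple arithmetic. Here’s a sketch using the CPAs reported above; the per-order shipping cost is a placeholder you’d replace with your own number:

```python
control_cpa = 98.03     # non-offer ads
experiment_cpa = 69.49  # free-shipping ads
shipping_cost = 12.00   # hypothetical per-order cost of honoring free shipping

headroom = control_cpa - experiment_cpa  # $28.54 of CPA savings per conversion
net_advantage = headroom - shipping_cost
print(f"net advantage per order: ${net_advantage:.2f}")  # positive => offer still wins
```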

All I have now is an appetite for more such tests. Warning: they can be as addictive as an all-day “Columbo” marathon.
