PPC Ad Creative: Follow the Rules, Dammit!

In PPC, always leave space for continued refinement and new discoveries, but don't take this as carte blanche to deviate from million-dollar-proven rules, just for the sake of idle curiosity. Columnist Andrew Goodman explains.

Disclaimer: the following rant is directed only at myself… and any other brilliant and creative being with flesh, bones, and blood coursing through his/her human, fallible body.

Here’s a little-known secret for writing persuasive PPC ad copy. Stop trying to persuade people.

I know, I know.

The “creatives” won’t like it. Nor will “expert copywriters”. Those remain important roles in the world of advertising and marketing, in certain packed arenas. (Some of those arenas aren’t packed anymore, and the creative is being lavished on declining numbers of real eyeballs, but that’s a story for another day.)

But about half the lift you’ll get from successfully iterating on these tiny PPC text ads comes from understanding how to structure a test and how to execute the testing process. If you’re being generous, the other half of the results comes from finding that Big Idea. Or two, or three. Those Big Ideas, of course, may not seem earth-shattering to people used to the notion that advertising requires the production budget of a short feature film. What counts as big in this context is simply “whatever got you the sale more often, measured against the cost of the clicks.” It’s pretty much that simple. It’s pretty much that quantifiable. There is very little room for debate when it comes to this type of ad testing.

Feel free to substitute your own KPIs here, of course. But if you’re giving an ad credit for high average order size, consider paying more attention to conversion rates in testing. Not in all cases, but in ad creative testing, average order value tends to normalize to a degree.
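To make “whatever got you the sale more often, measured against the cost of the clicks” concrete, here is a minimal sketch of that comparison. The figures and field names are hypothetical placeholders; assume you’ve exported per-ad clicks, cost, and conversions from your own reporting.

```python
# Minimal sketch: ranking ad variants by conversion rate and cost per conversion.
# All figures and field names are hypothetical placeholders for your own export.

ads = [
    {"id": "ad_a", "clicks": 2400, "cost": 1850.00, "conversions": 96},
    {"id": "ad_b", "clicks": 2150, "cost": 1710.00, "conversions": 118},
    {"id": "ad_c", "clicks": 2600, "cost": 2010.00, "conversions": 91},
]

for ad in ads:
    ad["conv_rate"] = ad["conversions"] / ad["clicks"]            # sales per click
    ad["cost_per_conv"] = ad["cost"] / max(ad["conversions"], 1)  # click cost per sale

# "Whatever got you the sale more often, measured against the cost of the clicks."
for ad in sorted(ads, key=lambda a: a["cost_per_conv"]):
    print(f"{ad['id']}: conv rate {ad['conv_rate']:.1%}, cost/conv ${ad['cost_per_conv']:.2f}")
```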

When it comes to pointless, subjective debate, testing means less of it. So go ahead, laugh and trash all the best practices someone may feel compelled to lecture you on when your results show otherwise. They: “You should send them to a more specific landing page.” You: “Not in this case. Look at the test results.”

PPC (and online information scent in general) thrives on generating response within a particular realm of consumer information-seeking behavior. Evan Williams, co-founder of Twitter, recently advised digital visionaries to put aside their lofty expectations of the medium in favor of a pragmatic view of the Internet as an “engine of convenience.” Rather than the Internet helping people find “new things,” Williams asserts that, on the whole, “People just want to do the same things they’ve always done.”

Sure, you could point to counter-examples, but I have to say, when I look at how our clients acquire customers online, and the kinds of things those customers are looking for, this approach gets the job done.

In my book on AdWords, I noted that online marketers are divided into people like Evan (I refer to that faction as “plumbers” or “economists”) and people who typically preface their articles, ebooks, etc., with the word “persuasive…”

I refer to the latter as “persuaders” or “ideologues.” I feel that the former usually get better results than the latter, and I probably should have made that point more strongly.

Our job, pretty much, is to synchronize as efficiently as possible with a searcher’s pre-existing intent. We can massage it and tilt it a little. Sometimes we can even use search to open a conversation, generate awareness, or cookie a visitor for future remarketing, but not to the extent that many think.

So let’s cut to the chase.

Take a decent-sized account, at least three years old. You should already have arrived at some impressive, stable learnings. You’ve spent the money to arrive at two or three (or 20-30) Big Ideas about what produces the best ROAS when you cobble together ad elements for any of the products or services the account sells. The best headline format, or a menu of two or three, for a given situation. A typical benefit statement or USP that works. Whether or not to use an offer or put a price in the ad (personally, I think these devices are typically overrated, but it doesn’t matter what I think: test rigorously and check your data). Whether to use more or fewer words. Which call to action is best. Whether a simpler or a more complex word is better. Whether a third-party endorsement does it. Whether something really unusual you discovered works better than any of those things.

You should be able to boil this list down to 2-3 really strongly held beliefs about ad performance in that account that you would put into action confidently, if you didn’t have any other idea of what to write.
To boil that further down to its essence: there’s a chance that one simple ad template would beat the existing ads of different styles running anywhere in the account more often than it would lose. Or at the very least, in the case of ongoing tests with six or more ads, it would place at least second.
Maybe it’s a couple of ads, or three. But if you do this enough, you can probably point to one. You could probably easily recite to me the first line of body copy, all 35 characters of it, from memory.
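If you do have that kind of proven template, it’s worth codifying it somewhere other than memory. Here is a minimal sketch of what that might look like; the element names and copy are hypothetical placeholders, not anyone’s actual winning lines.

```python
# Minimal sketch: codifying proven ad elements as a reusable template for new ad groups.
# Element names and copy are hypothetical placeholders, not anyone's actual winning lines.

PROVEN_TEMPLATE = {
    "headline": "{product} - Official Site",
    "line1": "Free Shipping. 30-Day Returns.",  # the proven first line of body copy
    "line2": "Order {product} Online Today.",
    "display_path": "example.com/{product_slug}",
}

def build_ad(product: str, product_slug: str) -> dict:
    """Start every new ad group from the proven template, then test challengers against it."""
    return {key: value.format(product=product, product_slug=product_slug)
            for key, value in PROVEN_TEMPLATE.items()}

print(build_ad("Widgets", "widgets"))
```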

I’ve written and relied on ads like that. Not only that, I’ve broken that powerfully successful first line of copy into three versions: one that alludes to two benefits, one that simplifies and alludes to only one of them, and one that flips the order in which the two benefits are listed. I’ve developed a conviction about the difference in performance between the version that orders the benefits X+Y and the one that orders them Y+X.

At this point, bear in mind that this is not merely throwing random orderings or characters at a wall and hoping to see what sticks. It isn’t just discovering performance gaps through lucky accidents. In the case of two powerful benefits, when people are reading quickly, it’s quite likely they respond differently depending on which benefit they encounter first, especially when the two are substantially different. If one is more powerful than the other, drop one and see what happens. Maybe one benefit works better than two because it leaves more white space, or because it reduces cognitive load. Regardless, those are reasons. That’s not just random crap on a page.
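For what it’s worth, a conviction like “X+Y beats Y+X” should rest on more than eyeballing two conversion rates. A two-proportion z-test is one common way to check that the gap isn’t a lucky accident; the sketch below uses hypothetical figures.

```python
# Minimal sketch: two-proportion z-test for "benefit X + Y" vs. "benefit Y + X".
# Figures are hypothetical; the point is checking that a gap isn't a lucky accident.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    return z, p_value

z, p = two_proportion_z_test(conv_a=118, clicks_a=2150, conv_b=96, clicks_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the ordering difference is real
```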

Different performance based on small but potentially significant differences in how a couple of benefits are worded? Most campaign managers wouldn’t even look.

Most probably consider those three versions roughly synonymous. They simply don’t keep track of the difference, let alone whether that way of writing the first line of body copy is so far out in front of other ideas that it merits a look in nearly every new ad written in the account.

Yet again, therein lies our advantage. If we’ve laid the groundwork in testing, then we’ve created powerful rules for expanding a large account with the greatest chance of success.

(Of course, you never foreclose the additional potential of new tests that are always running in the hopes of discovering the next, even bigger Big Idea. But beware of the trap outlined below.)

Another way to put that is, we’ve got ads (or parts of ads, or memes if you want) that are proven performers.

Let’s break that down even further to one word: proven. It’s tempting to view that as just a word. Maybe even an empty one.

To the business that has invested seven figures of ad spend in all this testing, the word proven isn’t an empty word. It’s an investment that’s already been made. It’s imperative that (a) there is something to show for it, and (b) that investment does not get squandered.

Let’s turn to (b). Why is it, when we have proven winning elements in our testing arsenal, that it is so common for account managers to break those rules? To try too many probably-losing contenders when building out new ad groups? To stubbornly insist on variety in the exercise, or product-specific “persuasion,” clever wordings, discounts and offers, and… well… just a bunch of random jumble that might work this time around?

Do Boeing and Bombardier build a few planes with weaker bodies and slower engines, just in case some people might like them? Sure, it’s an unfair comparison. As Jeffrey Hayzlett points out, we’re fortunate in marketing because when we err, typically no one dies. But you can see my point.

Perhaps it’s just human nature. In our profession, we do see ourselves as creative. Mavericks, rule-breakers and visionaries, etc.

We also, sometimes quite rightly, get caught up in the specificity of products and themes. Granularity, after all, is another powerful driver in the process of capturing high-intent buyers through SEM.

But if you deviate too much, too often, too capriciously from the rules you have put all those creative energies into building in the first place, then you’re not, in fact, respecting your very own creative process. And you’re not doing as good a job as you could be at leveraging that big testing investment.

“Always be testing” is a fair way to look at any form of marketing. Always leave space for continued refinement and new discoveries. But don’t take this as carte blanche to deviate from million-dollar-proven rules, just for the sake of idle curiosity, your own ego, or your tendency to work more randomly than others.

And, of course, you don’t have to launch new tests with a totally restrictive regime that sticks only to the “rules.” The rule-based creative simply needs to be involved. Perhaps even featured. But the beauty of a testing-friendly platform is that proven winners still have to earn their keep. Here’s the thing: they usually will. So don’t muzzle their obvious brilliance by forgetting to put them in.
