Filtering 'Bad' Traffic: For Best Results, Get Beyond Good and Evil

Is that questionable traffic segment worthless? Or merely worth less?

In some parts of the world, lengthy conversations are still being held on the subject of persuading clients to devote enough budget to digital. In light of past battles nearly won, it’s particularly maddening that some paid search campaign managers seem so bent on handcuffing their own accounts that they limit their upside through excessive filtering.

To be clear, it’s important to use a means of excluding unwanted traffic – such as keyword exclusions (negative keywords). But it’s also important that overall campaign strategy be driven by a game plan rather than fear or “best practices” hearsay. You’re in advertising, not corporate security. If you feel like your whole job is to keep “bad” clicks away from the website, chances are you’re over-filtering.

Some clients – indeed, more than half – will be timid and will try new things in their accounts slowly. And that’s fine.

A select few clients will be gunslingers, aggressive marketers who actually love to try new things.

But never, ever should the agency or expert over-filter on behalf of the client without being absolutely certain that the client is as conservative as one might assume.

In platforms like AdWords, we’ve been handed wonderful tools to get very granular in excluding certain keyword phrases and display network sources (and other segments) that are almost certainly bad bets to convert for the target market. From this simple principle inevitably grew overkill. Instead of focusing on the business reasons for filtering, some marketers focused on to-do lists (to look busy); exotic strategies (to look “advanced”); and scare tactics (to win business or to sell a new tool). And instead of seeing Google’s machine-learning capabilities in keyword match types and display network placements (expanded broad match in search, and automatic matching in the display network) as broadly positive developments with some negative elements that require hand-tweaking, some marketers have chosen to reject them outright and see only the negative aspects.

And so the negative keyword lists and publisher exclusions lists grew. And grew and grew and grew. And sometimes they were misapplied to the whole campaign when applying them at the ad group level would have sufficed.
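
To make the scoping point concrete, here’s a minimal sketch – plain Python with hypothetical ad group names, not the AdWords API – of why a campaign-level negative keyword can overshoot when an ad-group-level one would have sufficed:

```python
# Minimal sketch: how negative-keyword scope changes what traffic survives.
# Names and structure are hypothetical, not any Google API.

campaign_negatives = {"cheap"}          # applied campaign-wide: blocks everywhere
adgroup_negatives = {
    "luxury tours": {"cheap"},          # blocked only where it truly misfits
    "budget tours": set(),              # free to match "cheap" queries
}

def is_blocked(query: str, ad_group: str, campaign_level: bool) -> bool:
    """Return True if any negative keyword appears in the search query."""
    negatives = campaign_negatives if campaign_level else adgroup_negatives[ad_group]
    return any(neg in query.lower().split() for neg in negatives)

query = "cheap guided tours of rome"
for group in ("luxury tours", "budget tours"):
    print(group,
          "| campaign-level blocked:", is_blocked(query, group, campaign_level=True),
          "| ad-group-level blocked:", is_blocked(query, group, campaign_level=False))
# Campaign-level negatives block the query for BOTH ad groups;
# ad-group-level negatives block it only for "luxury tours".
```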

Sure! Powerful machine learning by the world’s largest technology company, using the world’s largest dataset, is 100 percent worthless! You should filter as much as you can by hand, and when that fails, get other computers involved to counteract Google’s computers, willy-nilly. You should make your account into one big filter.

Hmm.

As I see it, there are three main drawbacks to this over-filtering bias:

  1. You limit volume potential and total profit overall.
  2. Because you artificially create a narrower universe, but forget just how narrow you made it (and why), the “out of the box” volume boosters you reach for later (say, when the client asks for more, more, more) can turn out to be worse than good potential traffic that was right under your nose. Specifically, the “so-so” phrases you so hastily negatived out, or the “so-so” publishers you excluded, might have served some purpose to the business – more so than grasping at straws for unproven keywords or new, exotic channels.
  3. What I like to call the “short leash problem.” When you try to anticipate and react to every possible poor-performing segment (and sub-sub-sub-segment), your analysis gets too granular, and your assumptions, too causal. Mathematically, if you slice and dice everything enough, something will come in last place – often for no good reason, as the sketch after this list illustrates. The upside of a broader approach is that you keep your options open for random good luck, which may lead to more learning and, in the end, more volume and total profit.
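
A quick simulation – a minimal sketch with invented traffic numbers – shows how the short leash misfires: give 50 segments the identical true conversion rate, sample only 100 clicks from each, and one of them will reliably look like a disaster anyway.

```python
# Sketch: why "something always comes in last" when you slice traffic thinly.
# All segments share the SAME true 3% conversion rate; numbers are illustrative.
import random

random.seed(42)
TRUE_CVR = 0.03
SEGMENTS = 50          # e.g., placements or query buckets
CLICKS_EACH = 100      # small samples, as granular slicing tends to produce

rates = []
for seg in range(SEGMENTS):
    conversions = sum(random.random() < TRUE_CVR for _ in range(CLICKS_EACH))
    rates.append((conversions / CLICKS_EACH, seg))

worst_rate, worst_seg = min(rates)
best_rate, best_seg = max(rates)
print(f"true CVR: {TRUE_CVR:.1%}")
print(f"worst segment #{worst_seg}: {worst_rate:.1%}  <- looks 'bad', is just noise")
print(f"best segment  #{best_seg}: {best_rate:.1%}  <- looks 'great', same noise")
# Excluding the "worst" segment removes perfectly average traffic; re-run with
# a different seed and a different segment takes its turn at the bottom.
```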

There may even be deep-seated reasons we get addicted to the short leash. Economists explain the behavior as “myopic loss aversion,” and it can affect investment returns.

Think of it this way. One day, you lost a mitten. When you’re five years old, that’s bound to happen. But for some reason, the adult brain sees this loss as a significant moral failing and a potential threat to the family’s future financial viability. You’d hear about it over and over again, with constant warnings to “never” lose a mitten again (thinking in terms of absolutes), or worse, be fitted with “idiot strings” to ensure the security of your personal hand-warming equipment (shaming). You’d think that after years of training, and in an adult scenario that involves a mandate for profit maximization, it wouldn’t be hard to drop the baggage. But it is! Too easily, “should” and “ought” creep into our decision-making in ways that aren’t synonymous with “the predicted return on investment.”

If you’ve ever tried to advise Google that it’s going about something in the “wrong” way, or asked it to define exactly what a valid or invalid click is, you know that Google and its computers don’t think in terms of good and evil. Catchy slogans (“don’t be evil”) are basically red herrings; they are not, in any shape or form, Google policy.

One way of looking at the Google world of data-driven success is to say that “Google is like a baby’s brain” (terms used by one Googler attempting to explain the company’s apparent managerial chaos). Systems are built to absorb and learn at a breathtaking pace, just by “taking it all in” and letting the “brain” do what it does best – compute, iterate, and develop more complexity in responses than could be possible through a deliberate effort to “plan.” In fact, the “baby’s brain” analogy is a compliment to Google, at least in moral terms. A baby is much more judgmental and discerning than a machine-learning system. As inhuman as it may sound, machine learning works at its breathtaking best when it’s free of moral baggage.

Take a concrete example. Why prejudge a certain publisher in the display network because it’s a “certain type of site”? Just let the machines run and cut off the non-performers at a predetermined point. It could be that you get 200 clicks on a “silly” travel site for the same price as you pay for 30 clicks on the “serious” one; if the silly site’s cheaper clicks convert often enough to produce the same number of sales per dollar, the two turn out to be equally good buys.
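
With invented round numbers, the arithmetic behind “equally good buys” looks like this – what matters is the cost per conversion, not the character of the site:

```python
# Sketch with invented numbers: cheap clicks converting at a lower rate
# can cost exactly as much per conversion as pricier "quality" clicks.
budget = 100.0                 # dollars spent on each site

sites = {
    "silly":   (200, 0.01),    # 200 clicks ($0.50 CPC), 1% conversion rate
    "serious": (30, 2 / 30),   # 30 clicks (~$3.33 CPC), ~6.7% conversion rate
}

for name, (clicks, cvr) in sites.items():
    conversions = clicks * cvr
    print(f"{name}: {clicks} clicks, {conversions:.1f} conversions, "
          f"${budget / conversions:.2f} per conversion")
# silly: 200 clicks, 2.0 conversions, $50.00 per conversion
# serious: 30 clicks, 2.0 conversions, $50.00 per conversion
```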

Similarly, you should avoid excluding keyword phrases that “might not be exactly” what is being searched for. What if they aid research-stage awareness, or convert occasionally? Exclude away if the data look pitiful. But please don’t leap into a priori negativing-out of phrases including words like “recipes,” “cheap,” “directions,” “software,” etc. just because they’re slightly off your desired micro-intent. Try keeping them around a little longer to see whether they ever convert. Or try different ad groups, landing pages, and creative for different types of intent.

In some cases, you’ll make some amazing discoveries. We’ve discovered that searchers interested in high-volume orders actually use a variety of different signifiers, and they’re all seeking slightly different things (most of them being some form of bulk order). But at first glance, some of the words (“wholesale,” let’s say) appear to convert poorly. Until you solve the puzzle, the tight-leash, exclude-whole-hog mentality appears sound, but it doesn’t correspond well with the broader potential inherent in the search behavior.

To be sure, you’ll still want to use your human judgment to see patterns and to adjust slightly to taste. Just don’t overdo it. And try using rounds of lower bidding (signifying something that is worth less to you) rather than exclusions (signifying that the source is literally worthless to you).
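
As a minimal sketch of that “worth less, not worthless” principle – a generic proportional bid-to-target heuristic with invented numbers, not any specific platform feature:

```python
# Sketch of "bid down instead of excluding": scale the max CPC bid so the
# segment's expected cost per conversion lands near target. Numbers invented.

def rebid(current_bid: float, observed_cpa: float, target_cpa: float,
          floor: float = 0.05) -> float:
    """Proportionally lower (or raise) a bid toward the target CPA."""
    new_bid = current_bid * (target_cpa / observed_cpa)
    return max(new_bid, floor)  # keep a token bid: the segment stays alive

# A "so-so" placement: converting, but at twice the target CPA.
print(rebid(current_bid=1.00, observed_cpa=80.0, target_cpa=40.0))  # -> 0.50
# Exclusion says the traffic is worth nothing; the halved bid says it is
# worth about half as much, and keeps collecting data at the lower price.
```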

This column was originally published on Aug. 12, 2011 on ClickZ.
