In a previous column, I discussed how limiting your exposure to “small stressors” and trial-and-error testing can lead to corporate fragility. That’s what happens when you stop testing and cede optimization to black-box campaign automation tools that optimize to “results” without you, the analyst, ever understanding how those results are generated.
Of course, there must be a role for automation in marketing, just as there is a role for grocery stores and drive-throughs in satisfying our daily nutritional requirements. I’m not suggesting you head into the woods with a hunting knife every time your stomach rumbles, any more than I would suggest you should make 5,000 bid adjustments daily, one at a time. But just as eating breakfast, lunch, and dinner every day by cruising through the drive-through in your comfy SUV could shorten your life, a completely automated approach to marketing will cause your analytical abilities and corporate capabilities to atrophy. Getting the mix right is important.
Campaign automation tools can have some serious pitfalls. If you perform a postmortem on an account that ran one or more campaigns using, for example, Google AdWords’ “Conversion Optimizer” (hereafter, “the system”), you may notice that serious flaws and strategic errors have crept into the mix. Whether you were aware of it or not, the following may have been happening in these campaigns:
- Look deeply into Search Query Reports, especially if you’ve made significant use of the broad match type. CPA and volume targets may have been achieved only superficially, by aggressively cannibalizing “easy pickings” conversions from other campaigns, notably on brand terms. By “easy pickings,” I mean terms you’re already optimizing for and getting low CPAs on in their own dedicated ad groups; typically high-converting phrases like your brand terms. Pulling these into another campaign doesn’t actually improve your account; it just assigns credit to the “system” for hitting targets you were already hitting.
- Even worse, if you didn’t set up the campaign structure to separate display from search, the system might have experimented wildly and wasted significant funds in the display network, making up the difference with low-hanging fruit on the search side.
- The system tends to land a few well-priced conversions to even things out for the weird experiments, but it doesn’t aggressively pursue volume. So volume could be down 15 percent, but since you were setting and forgetting, you barely noticed the overall conservatism and lack of business dynamism compared with more agile, engaged competitors.
- New conventions and tacit knowledge around keyword optimization, keyword expansion, match types, etc. were never applied while the account coasted. You stuck to a comfortable range of so-so performance on an outdated keyword set.
- The system knew which keywords to negative out, right? Usually. When it felt like it. Eventually. What a horrible waste to leave it to its own devices!
- You don’t like to sit in ad position 1 or 1.1. Yet the system had no compunction about this. The resulting overspend was gravy to the publisher. Even better, if several advertisers ran automation at once without being mindful of ad position, they collectively bid CPCs up a few notches! By contrast, if most everyone had tried to stay out of position 1 most of the time, actively managing their accounts so they didn’t hold many position 1s unless the return on those keywords was sky-high, the auction would have weakened and everyone would have enjoyed lower (and fairer) CPCs and better ROI.
- The system never aggressively pursued “goldmine” publisher partners by using managed placements when they were discovered, but instead judiciously fed you the odd conversion from such “plum” publishers here and there, and then kept right on watering down your success with too much of the weaker spray-and-pray inventory. Gotta be fair to all the advertisers, right? The system learns a lot in theory, but doesn’t hand over all the fruits of the learning to you. 🙁
- Even if there wasn’t a conservative bias to the bid calibration, you might find that the account only grew in line with industry growth or growth in search queries on this keyword universe. Given that competitors were actively tinkering and experimenting and becoming strengthened via trial and error (while aiming to hit CPA targets), you might well find that your account gradually got smaller…as did your market share.
- Nobody tested any ad creative during the time you ran things on autopilot. You’ve learned nothing on that front.
- The system ran a lot of the conversions through a limited set of (mostly broad-matched) keywords. You never got a feel for the true relative bids for different match types, many query avenues weren’t tested hard enough, and you haven’t learned much about user intent – but the owner of the system has.
- You haven’t built a comprehensive, increasingly predictable response dataset asset for your company through trial and error…but by inducing hundreds of thousands of companies to allow multiple forms of black box automation to run amok in their accounts, and using those companies as guinea pigs, Google has now built a gargantuan asset of this type…which it needn’t share with you.
- Sorry, but you weren’t allowed to set bid factors by country or region (as is now available under Enhanced Campaigns). You trusted the system to get the right bids for China, Pakistan, and Ireland…eventually. Trust? No. Verify? You must.
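To make the first pitfall above concrete, here’s a minimal sketch of how you might audit a Search Query Report export for brand-term cannibalization. The data, campaign names, and brand phrases are all invented for illustration; a real audit would start from your own downloaded SQR rows.

```python
# Hypothetical sketch: flag brand-term queries that a broad-match campaign
# has "stolen" from a dedicated brand campaign. All names and rows are
# invented, not a real account's data.

BRAND_TERMS = {"acme", "acme shoes"}  # assumed brand phrases
BRAND_CAMPAIGN = "Brand - Exact"      # assumed dedicated brand campaign

# (campaign, matched query, conversions, cost) from a hypothetical SQR export
sqr_rows = [
    ("Brand - Exact",   "acme shoes",        40, 40.0),
    ("Broad Discovery", "acme shoes coupon", 12, 30.0),
    ("Broad Discovery", "running shoes",      3, 45.0),
]

def is_brand_query(query, brand_terms):
    """A query counts as brand if it contains any brand phrase."""
    q = query.lower()
    return any(term in q for term in brand_terms)

# Conversions credited to the automated broad campaign that are really
# brand demand you were already capturing elsewhere.
cannibalized = [
    row for row in sqr_rows
    if row[0] != BRAND_CAMPAIGN and is_brand_query(row[1], BRAND_TERMS)
]

for campaign, query, conv, cost in cannibalized:
    print(f"{campaign!r} took brand query {query!r}: "
          f"{conv} conversions at CPA {cost / conv:.2f}")
```

Queries flagged this way are good candidates for negative keywords in the automated campaign, so the “system” has to earn its CPA targets on genuinely incremental traffic.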
The good news: even if you were a victim of most of the above, you’re in way better shape than a company that relies primarily on SEO search query data in its analytics reports. If you pay for clicks, you still get lots of historical query data and lots of other great segments to optimize around, even when your strategy is passive or flawed.
Feel free to chime in with some of the things you may have encountered using automated tools.