There has been much confusion about the changes to Google's Quality Score. Here's a look at how things have changed, how they've stayed the same, and what marketers should do going forward.
Recently, some clients and colleagues joined me for a custom seminar at Google Canada headquarters. While some of the group may have focused on the content, and others were struck by the quirky office features like the balcony mini-putt course and the DJ booth, I noticed the tone and presentation, right down to the Google font on the slide footers.
Google's name was now in white, and it felt somehow muted. Certainly, it felt like a departure from Google's "scrappy and earnest" image of days gone by. You wouldn't call it apologetic, though, so much as just blending in. "Shhh! Google's not here!"
Now that Google is everywhere, perhaps they're hoping you'll sort of forget they're in the room. Hey, it might work.
Google recently released a white paper entitled "Settling the (Quality) Score," intended to remind advertisers of just what Quality Score is, and what it isn't.
The paper is unexpectedly candid and seems to have caught some players unprepared. The tone seems to channel Rush's "Limelight," in which Geddy Lee has "no heart to lie," averring that he "cannot pretend a stranger is a long-awaited friend."
There are some substantive and valuable points in the paper, and probably - though we'll never get an official chronology out of them - mild evidence that Quality Score has evolved once again. The advanced advertiser will want to take note, as usual. What's also cool is that the rank-and-file advertiser is being asked to take third-party claims, and industry-water-cooler "Quality Score angst," with a grain of salt. We'll get to this stuff below.
First, though, the tone and style thing again.
Google urges advertisers to use Quality Score as a guide, but to avoid making it the "focus of account management." Part of the reason is that it's only an indicator, not a "detailed metric." (This more or less acknowledges that we see only rough summaries of Quality Score data, and can control Quality Score outcomes only to an extent.)
Google amplifies this by reminding advertisers that reported Quality Scores never directly show us information related to:

- geographic differences in how ads perform and what clicks cost;
- differences between individual ads within an ad group;
- per-query scores on the searches that actually trigger your ads.
So while you may get a better deal on a click, and better positioning, in one country than in another, none of that is reported back to you in an actionable way. While you may be able to boost the click-through rate (CTR) of your ads (or elements of relevance beyond CTR) by testing various versions, no specific information will tell you that one ad within an ad group is better for your Quality Score than another. And your "real" Quality Score may be very low on some non-exactly-matching queries triggered by your phrase- or broad-match keywords, despite a visible aggregate Quality Score of, say, seven or higher. The upshot: Google has a very complex formula to determine an ad's rank and eligibility (impression share), and it isn't showing you the lion's share of that information.
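As a purely hypothetical illustration (Google does not disclose how per-query scores roll up into the keyword-level number you see), even a simple impression-weighted average shows how a handful of poorly matched queries can hide behind a healthy-looking aggregate:

```python
# Hypothetical illustration only: the real roll-up formula is undisclosed.
# We assume a simple impression-weighted average for demonstration.

def aggregate_quality_score(query_data):
    """query_data: list of (impressions, per_query_score) tuples."""
    total_impressions = sum(imp for imp, _ in query_data)
    weighted = sum(imp * score for imp, score in query_data)
    return weighted / total_impressions

# A broad-match keyword matching mostly relevant queries,
# plus a few loosely matched ones whose "real" score is very low:
queries = [
    (9000, 8),  # close variants: high relevance
    (500, 2),   # loosely matched query
    (500, 3),   # another poor match
]

print(round(aggregate_quality_score(queries), 2))  # prints 7.45
```

The visible score rounds up to a comfortable seven-plus, while a tenth of the traffic is effectively QS 2-3 — exactly the kind of detail the reported number never surfaces.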
Quality Score is still pretty important, but Google reminds advertisers that it's going to be virtually impossible to manage to it, since reported information is not "detailed information." Looking at that number might give you a general sense of where you have room to improve, but that's about it.
"Pssst! Quality Score isn't really out here! It's working quietly in the background."
Quality Score has evolved continually from the days when it was fairly rudimentary, through to the "it's calculated for every query, and includes multiple factors" era post-2007, right up to now. Google, rightly, has continued to assess as much data as possible in order to show the most relevant queries, while maximizing revenue.
Account best practices haven't changed radically, even since the very first version of AdWords Select came out. For example, even before we had access to easy-to-use AdWords tracking code, testing ads was known to involve a delicate balance between high CTR (reported as a key factor in AdRank from 2002 on) and return on investment (ROI) (something we could measure by tagging creative versions, cookieing users, and looking at conversion data in a custom analytics tool, or even by using logfile-based analytics).
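To make that CTR-versus-ROI balance concrete, here is a minimal sketch, with invented variant names and numbers (nothing AdWords-specific), of how a tagged creative test might be scored on both metrics at once:

```python
# Hypothetical sketch: comparing two tagged ad variants on CTR and ROI.
# Variant names and figures are invented; in practice they would come
# from your analytics tool or log-file data, as described above.

def ctr(clicks, impressions):
    """Click-through rate: the relevance signal Google rewards."""
    return clicks / impressions

def roi(revenue, cost):
    """Return on investment: what the advertiser actually banks."""
    return (revenue - cost) / cost

variants = {
    # variant: (impressions, clicks, cost, revenue)
    "ad_a": (10000, 400, 200.0, 500.0),  # higher CTR
    "ad_b": (10000, 250, 125.0, 450.0),  # lower CTR, better ROI
}

for name, (imp, clk, cost, rev) in variants.items():
    print(name, "CTR:", round(ctr(clk, imp), 3), "ROI:", round(roi(rev, cost), 2))
```

Here "ad_a" wins on CTR (which helps AdRank and your Quality Score) while "ad_b" wins on ROI — the delicate balance in a nutshell: the version Google's formula favors is not automatically the version that makes you the most money per dollar spent.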
With many incremental modifications along the way - notably, developments such as ad extensions, and the growing subtlety of the formula that means keywords are never formally "disabled," etc. - those same basic principles have held true for 12 years.
In the old days, fixed minimum bids and "disabled keywords" were the bugaboo of advertisers. As the system evolved, those problems receded as relevance was enforced with what was mostly a sliding scale (although a Quality Score of one or two pretty much guarantees your keyword won't trigger ads). Yet through various iterations of Google's formula, it was pretty clear that accounts could be damaged by allowing too many "low-quality impressions" to rack up. There was some (undisclosed) element of "contagion" that would lead to you in essence paying a tax account-wide if you failed to build your account meticulously according to "granularity best practices" from the get-go. (See the Google AdWords Tourist Tax.)
I counseled advertisers from the very early days that "build it tightly first, then broaden out" was not only a valuable approach to piloting a new ad medium, it was virtually a must due to the way Google kept track of CTR and "bad stuff" in accounts. An established history seemed to be either good or bad baggage that could either trip up newbies or lead to major edges for savvy players.
Google began working to reduce such anomalies long ago. Now they are being seriously phased out. Good riddance.
How has Google managed to reduce "bad keyword and rotten architecture contagion" to near-nil, while still pleasing users? Undoubtedly through Big Data, but also by doing a good job of looking at the history (even short histories) of your own keywords. By combining its massive database of information from all accounts with your own account data - factoring in not just CTR and other relevance signals in ads and keywords, but also landing page user experience, as checks on quality - Google can nearly always avoid showing ads on keywords that "should be QS 1 or 2," even without much direct history on those keywords. Plain and simple: Google can now give more of your keywords the benefit of the doubt, even if you're clumsy in your architecture, new to AdWords, or allow some stinkers to creep into the mix.
Like I said, Google tells us not to sweat Quality Score so much, in part because they don't show you enough detailed information to make it minutely actionable. Reported keyword Quality Scores, it's obvious, resemble the flickering shadows dancing on the walls of Plato's cave. False prophets will, of course, encourage you to optimize to the shadows.
Google continues to stand behind its Quality Score reporting as a handy guide to relevance, but otherwise seems to be asking third parties to climb down from overly ambitious claims about engineering accounts around Quality Score. Quality Score "experts" - that is to say, the very third parties who have touted their own tools' legendary abilities to "reverse engineer Quality Score," and the like - aren't having it. Their clever tactic seems to roughly amount to: (1) tout Quality Score insight as a lead-generation tactic for their tool and/or agency services; (2) read Google's white paper; (3) ignore Google's white paper; (4) go right back to saying whatever it is they were saying before.
A quick search turns up, sadly, several players still singing the same old tune, advising advertisers to optimize their accounts against flickering shadows. But to again cite Neil Peart's lyrics in "Limelight," a better approach would be to "get on with the fascination" (pursue your fundamental advertising goals using all the metrics at your disposal, worrying more about profit and loss than unactionable scorecards), understanding that somewhere hidden in Google's back-end data, undisclosed to you, lies "the real relation, the underlying theme." Or perhaps not even Google knows.
Goodman is founder and President of Toronto-based Page Zero Media, a full-service marketing agency founded in 2000. Page Zero focuses on paid search campaigns as well as a variety of custom digital marketing programs. Clients include Direct Energy, Canon, MIT, BLR, and a host of others. He is also co-founder of Traffick.com, an award-winning industry commentary site; author of Winning Results with Google AdWords (McGraw-Hill, 2nd ed., 2008); and frequently quoted in the business press. In recent years he has acted as program chair for the SES Toronto conference and all told, has spoken or moderated at countless SES events since 2002. His spare time eccentricities include rollerblading without kneepads and naming his Japanese maples. Also in his spare time, he co-founded HomeStars, a consumer review site with aspirations to become "the TripAdvisor for home improvement." He lives in Toronto with his wife Carolyn.