Quality Score, We Hardly Knew Ye

There has been much confusion about the changes to Google's Quality Score. Here's a look at how things have changed, how they've stayed the same, and what marketers should do going forward.

Recently, some clients and colleagues joined me for a custom seminar at Google Canada headquarters. While some of the group may have focused on the content, and others were struck by the quirky office features like the balcony mini-putt course and the DJ booth, I noticed the tone and presentation, right down to the Google font on the slide footers.

Google’s name was now in white, and it felt somehow muted. Certainly, it felt like a departure from Google’s “scrappy and earnest” image of days gone by. You wouldn’t call it apologetic, though, so much as just blending in. “Shhh! Google’s not here!”

Now that Google is everywhere, perhaps they’re hoping you’ll sort of forget they’re in the room. Hey, it might work.

Google recently released a white paper entitled “Settling the (Quality) Score,” intended to remind advertisers of just what Quality Score is, and what it isn’t. 

The paper is unexpectedly candid and seems to have caught some players unprepared. The tone seems to channel Rush’s “Limelight,” in which Geddy Lee has “no heart to lie,” averring that he “cannot pretend a stranger is a long-awaited friend.”

There are some substantive and valuable points in the paper, and probably – though we’ll never get an official chronology out of them – mild evidence that Quality Score has evolved once again. The advanced advertiser will want to take note, as usual. What’s also cool is that the rank-and-file advertiser is being asked to take third-party claims, and industry-water-cooler “Quality Score angst,” with a grain of salt. We’ll get to this stuff below.

First, though, the tone and style thing again.

Google urges advertisers to use Quality Score as a guide, but to avoid making it the “focus of account management.” Part of the reason for that is that it is only an indicator, not a “detailed metric.” (This more or less acknowledges that we only see rough summaries of Quality Score data, and we can only control Quality Score outcomes to an extent.)

Google amplifies this by reminding advertisers that reported Quality Scores never directly show us information related to:

  • Country
  • Differences in the quality contributions of different ad creative versions in an ad group
  • Non-exact-query matches (reported Quality Score only addresses exact query matches)

So while you may get a better deal on a click, and better positioning, in one country than in another, nothing about that will be reported back to you in an actionable way. While testing various ad versions may boost click-through rate (CTR), or improve elements of relevance that aren’t CTR-related, no specific information will tell you that one ad within an ad group is better for your Quality Score than another. And your “real” Quality Score may be very low on some non-exactly-matching queries triggered by your phrase or broad-match keyword, despite a visible aggregate Quality Score of, say, seven or higher. The upshot is that Google has a very complex formula to determine an ad’s rank and eligibility (impression share), and it isn’t showing you the lion’s share of that information.
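
Google doesn’t disclose how query-level quality data rolls up into the single score shown beside a keyword, but a minimal sketch with purely hypothetical queries and scores illustrates why a healthy-looking aggregate can hide weak matches:

    # Illustrative only: Google does not publish how per-query quality data
    # rolls up into the score shown beside a keyword. Numbers are hypothetical;
    # the point is how an aggregate can mask weak non-exact matches.
    queries = [
        # (matched query, impressions, hypothetical per-query quality score)
        ("blue widgets", 9000, 8),
        ("widgets blue cheap", 600, 4),
        ("free blue widget clipart", 400, 2),  # poor-intent broad-match trigger
    ]

    total_impressions = sum(impressions for _, impressions, _ in queries)
    weighted_score = sum(impressions * qs for _, impressions, qs in queries) / total_impressions

    print(f"Aggregate score, impression-weighted: {weighted_score:.1f}")  # ~7.5
    print(f"Worst per-query score: {min(qs for _, _, qs in queries)}")    # 2

A keyword like this would look fine on the surface, while a slice of its traffic matches queries that would score very poorly on their own.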

Quality Score is still pretty important, but Google reminds advertisers that it’s going to be virtually impossible to manage to it, since reported information is not “detailed information.” Looking at that number might give you a general sense of where you have room to improve, but that’s about it.

“Pssst! Quality Score isn’t really out here! It’s working quietly in the background.”

The Case for “Nothing Has Changed”

Quality Score has evolved continually from the days when it was fairly rudimentary, through the post-2007 “it’s calculated for every query, and includes multiple factors” era, right up to now. Google, rightly, has continued to assess as much data as possible in order to show the most relevant ads on each query, while maximizing revenue.

Account best practices haven’t changed radically, even since the very first version of AdWords Select came out. For example, even before we had access to easy-to-use AdWords tracking code, testing ads was known to involve a delicate balance between high CTR (reported as a key factor in AdRank from 2002 on) and return on investment (ROI), which we could measure by tagging creative versions, cookieing users, and looking at conversion data in a custom analytics tool, or even by using logfile-based analytics.
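
That balancing act comes down to simple arithmetic per creative version. The figures and field names below are hypothetical, but the calculation is the same whether the data comes from tagged URLs, a custom analytics tool, or logfiles:

    # Hypothetical per-creative numbers, as you might assemble them from tagged
    # destination URLs, a custom analytics tool, or logfile analysis.
    ads = {
        "headline_a": {"impressions": 20000, "clicks": 600, "cost": 450.0, "revenue": 900.0},
        "headline_b": {"impressions": 20000, "clicks": 380, "cost": 290.0, "revenue": 1150.0},
    }

    for name, a in ads.items():
        ctr = a["clicks"] / a["impressions"]           # feeds relevance / AdRank
        roi = (a["revenue"] - a["cost"]) / a["cost"]   # what actually pays the bills
        print(f"{name}: CTR {ctr:.1%}, ROI {roi:.0%}")

    # headline_a wins on CTR (3.0% vs. 1.9%); headline_b wins on ROI (297% vs. 100%).

The tension is visible right away: the ad that helps your CTR (and, by extension, your standing in the auction) is not necessarily the ad that makes you the most money.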

With many incremental modifications along the way – notably, developments such as ad extensions, and the growing subtlety of the formula that means keywords are never formally “disabled,” etc. – those same basic principles have held true for 12 years.

What Has Changed?

In the old days, fixed minimum bids and “disabled keywords” were the bugaboos of advertisers. As the system evolved, those problems receded as relevance was enforced on what was mostly a sliding scale (although a Quality Score of one or two pretty much guarantees your keyword won’t trigger ads). Yet through various iterations of Google’s formula, it was pretty clear that accounts could be damaged by allowing too many “low-quality impressions” to rack up. There was some (undisclosed) element of “contagion” that could, in essence, leave you paying an account-wide tax if you failed to build your account meticulously according to “granularity best practices” from the get-go. (See the Google AdWords Tourist Tax.)

I counseled advertisers from the very early days that “build it tightly first, then broaden out” was not only a valuable approach to piloting a new ad medium; it was virtually a must, given the way Google kept track of CTR and “bad stuff” in accounts. An established history seemed to be baggage, good or bad, that could trip up newbies or hand major edges to savvy players.

Google began working to reduce such anomalies long ago. Now they are being seriously phased out. Good riddance.

How has Google managed to reduce “bad keyword and rotten architecture contagion” to near-nil, while still pleasing users? Undoubtedly through Big Data, but also by doing a better job of looking at the history (even a short history) of your own keywords. By combining its massive database of information from all accounts with your own account data – factoring in not just CTR and other relevance signals in ads and keywords, but also landing page user experience indicators as checks on quality – Google can nearly always avoid showing ads on keywords that “should be QS 1 or 2,” even without much direct history on those keywords. Plain and simple, Google can now give more of your keywords the benefit of the doubt, even if you’re clumsy in your architecture, new to AdWords, or allow some stinkers to creep into the mix.

“There You Go Again,” Dept.

Like I said, Google tells us not to sweat Quality Score so much, in part because they don’t show you enough detailed information to make it minutely actionable. Reported keyword Quality Scores, it’s obvious, resemble the flickering shadows dancing on the walls of Plato’s cave. False prophets will, of course, encourage you to optimize to the shadows.

Google continues to stand behind its Quality Score reporting as a handy guide to relevance, but otherwise seems to be asking third parties to climb down from overly ambitious claims about engineering accounts around Quality Score. Quality Score “experts” – that is to say, the very third parties who have touted their own tools’ legendary abilities to “reverse engineer Quality Score,” and the like – aren’t having it. Their clever tactic seems to roughly amount to: (1) tout Quality Score insight as a lead generation tactic for their tool and/or agency services; (2) read Google’s white paper; (3) ignore Google’s white paper; (4) go right back to saying whatever they were saying before.

The Universal Dream

A quick search turns up, sadly, several players still singing the same old tune, advising advertisers to optimize their accounts against flickering shadows. But to again cite Neil Peart’s lyrics in “Limelight,” a better approach would be to “get on with the fascination” (pursue your fundamental advertising goals using all the metrics at your disposal, worrying more about profit and loss than unactionable scorecards), understanding that somewhere hidden in Google’s back-end data, undisclosed to you, lies “the real relation, the underlying theme.” Or perhaps not even Google knows.
