In Part 1 we discussed what Quality Score does to your business performance. Now I will reveal the big secret to Quality Score.
Google’s New Disclosure
Unless you’ve been living under a rock, you’ve been probing Google’s new Quality Score reporting to see expected CTR, ad relevance, and landing page experience on a variety of your keywords.
I’ll set landing page experience aside for now; I dissected the CTR part of the equation in Part 1.
What is “ad relevance”?
I know Google is providing a jumble of bullet points in its help files, some of which feed into bad ad copywriting and shortsighted tactics. I’d just as soon break out of the box of guessing for its own sake, and propose a bold definition of “ad relevance.” Some of it might even sync well with how Google sees it.
Some of the answer is actually revealed when you post new keywords in an existing ad group and look at their initial Quality Scores (“Bam! 10! Why?”). And more of the answer is revealed when you try actual queries in real life to get a feel for what the search engine results pages (SERPs) look like to an end user.
The following cut at it won’t be strictly accurate, but if it can help cut through some myths and help you think more about the Holy Grail of search itself (true searcher intent), then I’ll have achieved my goal.
Ad relevance is not merely about “the relevance of your ad to your keywords and landing pages,” as if the entire issue is like keyword SEO circa 2002, getting all matchy-matchy with everything. Sure, matching things helps your campaign performance, and users do maintain persuasive momentum via strong information scent. But how does Google calculate this algorithmically? If it’s easy to game, it becomes worthless. Google won’t make it easy to game. So Google uses complex calculations resting on a rich foundation of big data. Google can measure concepts and complex behaviors. It’s not limited to a few isolated statistics.
Here’s the Big Secret
Ad relevance, I suggest, means: Should we show any ads to users on this kind of query; any ads at all? And how good an idea is it – from a searcher intent standpoint – to show your ad to this user? That’s it.
To elaborate, if it’s going to be a good idea to show that ad to nearly every user who types a query that potentially triggers your ad based on this specific keyword in your account, then that keyword’s Quality Score will be 10. The 10 reflects actual Quality Scores achieved across thousands of impressions, for mature keywords. It reflects mostly or entirely predictive data if it’s a new keyword. If it’s almost certain to be a horrible idea to show that ad to nearly every user who types in a query that might trigger that ad, then Quality Score will probably be around two.
How likely is it that “ad relevance” is to some degree a fluid concept in the reporting, thus more predictive when there is limited data for your keyword in your account, and more governed by actual CTR (supposedly a separate component) after tens of thousands of impressions have built up? I hope it is very likely. If users are falling all over themselves to click your ad in the real world, it’s hard to envision what logic would justify some higher-order relevance component that Google would still enforce to hold your ad back from showing. One concern might be that your ad could be misleading to users and causing an artificially high CTR, with users leaving your page in horror, but that point should be covered by the landing page experience component of Quality Score.
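That predictive-to-actual transition can be pictured with a standard smoothing technique. To be clear, Google publishes none of its Quality Score math; the sketch below is purely a hypothetical illustration of the idea in the preceding paragraph, using Beta-Binomial-style smoothing where a predicted CTR acts as a prior that observed clicks gradually outweigh. All names and numbers are invented for illustration.

```python
def blended_ctr(prior_ctr, prior_weight, clicks, impressions):
    """Blend a predicted CTR with observed CTR, weighted by volume.

    prior_ctr    -- predicted CTR before any data (e.g. from similar keywords)
    prior_weight -- how many impressions the prediction is "worth"
    clicks, impressions -- actual performance for this keyword in this account
    """
    return (prior_ctr * prior_weight + clicks) / (prior_weight + impressions)

# A brand-new keyword leans entirely on the prediction...
new_kw = blended_ctr(prior_ctr=0.05, prior_weight=1000,
                     clicks=0, impressions=0)

# ...while a mature keyword with tens of thousands of impressions is
# governed almost entirely by its actual CTR of 800/40000 = 0.02.
mature_kw = blended_ctr(prior_ctr=0.05, prior_weight=1000,
                        clicks=800, impressions=40000)

print(round(new_kw, 4))     # 0.05 (pure prior)
print(round(mature_kw, 4))  # 0.0207, pulled close to the observed 0.02
```

The design point is the one the column makes: with little data the estimate is all prediction, and as real impressions pile up the prediction's influence shrinks toward irrelevance.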
For Google, Quality Score is part of an interlocking set of algorithms that must determine page layouts – what types of content coexist on the SERPs – not just a single ranking of “10 blue links” or even a dual one with a fixed ad space and a fixed organic links space. Google has massive experience in judging which queries have the types of intent that make it worthwhile to show units in the mix including news, weather, shopping results, AdWords ads, videos, etc. With personalization, it’s getting even better at it.
As evidence of this, notice how you can get an initial Quality Score of 10 on commercially oriented key phrases like “buy resistance bands” and “bulk logo squeeze toys,” and on exact matches like [purple baked beans], in cases where similar tight-intent exact matches have shown obvious commercial intent and high CTRs, whether in your account or in other advertisers’ accounts.
Then, run real-world search queries to confirm this point. On queries with commercial intent, many ads show on the page – some probably coming in through broad matching. On queries with clearly non-commercial intent, the lower Quality Scores on keywords being bid on by eligible advertisers may translate into less aggressive matching because Google is trying to match intelligently for the sake of the end user. In the accompanying screenshot, you can see how zero ads show up on “benefits of resistance bands,” but advertisers are all over the “buy” word for the same item. I’m betting that’s not all due to advertiser behavior (such as everyone being savvy with negative keywords). Quality Score, how match types work, and the interplay of several complex Google algorithms make the difference in what ads the user sees, if any.
The system isn’t perfect; that’s impossible given query ambiguity. In any case, in the “attention economy,” Google wants to treat user attention as a precious commodity. It wants users to see a lot of content where content is most relevant. It wants users to welcome ads when they appear, and not reflexively avoid all ads.
Sure, Google can tweak those dials to squeeze more revenue out of all advertisers in the aggregate. It’s not running a charity. But within this universe, advertisers who sculpt and hone their accounts to be as sensitive as possible to the nature of searcher intent will find themselves sailing through with lower CPCs than their more ham-fisted counterparts.
Whither Your 3s?
There remain special problems with some keywords and some industries, and Google continues to look into refinements to ease some of the pain. In the end, Quality Scores on “tough” (ambiguous or mass intent) keywords are not a report card, nor any type of judgment that you’ve failed. They’re simply benchmarks of intent matching. In a future column, I’ll check back on how things are going with Quality Scores and strategies in B2B and other challenging areas.
This column was originally published on August 10, 2012.