Few issues divide the search engine marketing (SEM) community more than cloaking. One segment firmly believes it has the right to cloak content from users while the other strongly feels it’s a deceptive tactic. Get the two talking and tempers flare.
Complicating the issue is the fact that though some search engines have guidelines against cloaking, arguably these same engines still allow it or even practice it themselves. Just trying to agree on a definition of cloaking can lead to frustration.
Search engine marketer Alan Perkins hoped to clarify matters by publishing “Cloaking Is Always A Bad Idea.” Instead, the article renewed the debate, but perhaps in a constructive manner.
Below, a look at why people have traditionally cloaked, how XML feeds provide a form of approved cloaking, and why the bigger issue isn’t whether cloaking is permitted, but whether paid content is subject to more liberal acceptability rules.
Do Things for Humans, not Search Engines
Perkins is a firm believer in doing things for humans, not search engines. His view is people should make good content, then only tweak it to the degree search engines commonly encourage — writing good title tags or making pages easily found via site maps.
“Suppose search engines did not exist. Would the technique still be used in the same way?” he asks in “The Classification of Search Engine Spam.” It’s good advice I agree with. I have always strongly recommended people create excellent content, then do small, simple things that improve rankings. Perkins’s guidelines can be too restrictive, however, requiring that people make search-engine-specific changes only with “written permission” (presumably in published guidelines).
Google recently updated its Webmaster guidelines, which offer lots of practical advice. But not everything is covered. Let’s say you have a very long page about buying a used car. One section might deal with places that sell used cars, the second covers negotiating a deal, and a third discusses how to inspect a car before buying. For search engine purposes, it’s wise to break that page into three different ones, so each new page is more focused on one subtopic.
Such a step wouldn’t be done primarily for humans. Humans might prefer one big page. It’s such a subtle change I doubt any search engine would fault you for it. I think the “spirit of the basic principles” Google’s guidelines describe is still followed. You created good content not for Google but for humans. Breaking up the page is something you may have done to help Google index and rank that content. You offer the same good content to users.
Not Everyone Agrees With Search Engines
“Would I do this if search engines didn’t exist?” The question Google suggests you ask yourself in its guidelines is excellent advice and adheres to what Perkins preaches. It’s the same advice almost any search engine would give you on getting listed outside a paid inclusion program. How does all this relate to the cloaking debate?
Not everyone agrees with Perkins. Not everyone agrees with Google or other search engines’ guidelines. There’s always someone who feels his situation justifies doing something specifically for search engines, rather than humans.
Sometimes, justification involves search engines’ technical limitations:
- You can’t read Flash content, so I’m building a page just for you to index.
- I have a dynamic Web site you refuse to index, so I’m creating a static page that describes each product.
Other times, marketers will do whatever they feel is necessary to compete. “I know people are building pages specifically to please your algorithm and getting away with it, so I’ll do the same.” They know they may get caught, even banned. It’s a risk they take.
The Doorway-Page Dance
The “do whatever it takes” camp tends to consist of “doorway page” practitioners. A doorway page is one built to target a particular search term: you tweak the title tag, meta tags, and body copy in a way you hope pleases the search engine’s algorithm.
The result is often a very ugly page you’d never want a human to see. I recently wanted information about “Thomas and the Magic Railroad” for my son and needed to go someplace other than the official fan site. On Google, I searched “thomas and the magic railroad fan site.” The last of the top pages listed was this:
… kinkaid – the princess bride movie three six mafia mp3 the offspring music
three six mafia photos the oreilly factor show thomas magic railroad three dog night …
The text is nonsensical. This page is simply a bunch of words. The person who created it hopes some of the words will somehow form a match that pleases Google. It’s not a sophisticated doorway-page attempt, but it worked for this extremely long query.
Doorway pages were popular in the late ’90s. They’ve declined for several reasons. Improved use of link analysis is a key factor. Google, along with the other major crawlers, needn’t depend only on a page’s content to know what it’s about. They analyze links to understand pages’ content and popularity. For popular queries, doorway pages have a much harder time succeeding outside paid-inclusion programs.
The emergence of paid-placement programs also impacted doorways. Lots of effort can be put into doorway pages, but there’s no ranking guarantee. Paid placement guarantees top rankings. It comes at a price, but it may be worth paying when measured against time spent on doorways and uncertainties.
Finally, paid inclusion provided a solution to reasons doorway pages were deployed. Pages with dynamic content that might be missed by some crawlers can now employ paid-inclusion programs to get indexed without going the doorway route.
Bring on the Cloaking
Traditional doorway pages are in decline, but they still exist. The problem remains: It’s not content you want users to see. As in the above example, a page of nonsensical content drives users away. That’s why doorway use is often accompanied by cloaking.
When cloaking, you show the search engine something other than what you show users. There are many ways to cloak. Those who are serious about it typically do what’s called IP cloaking. This means you know all the Internet addresses that the major search engines’ spiders use when they access the Web. If a request comes from one of those known addresses, you deliver the custom content. Meanwhile, humans see something different.
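To make the mechanism concrete, here is a minimal sketch of IP cloaking as described above: serve one page to requests from known spider addresses, another to everyone else. This is for illustration only, not an endorsement; the addresses (drawn from a documentation-reserved range) and page contents are hypothetical.

```python
# Hypothetical sketch of IP cloaking. The spider addresses below are
# placeholders from a documentation-reserved range, not real crawler IPs.
KNOWN_SPIDER_IPS = {
    "192.0.2.10",
    "192.0.2.11",
}

# Content tuned for the crawler's algorithm vs. the page humans see.
SPIDER_PAGE = "<html><body>keyword-rich copy built for the crawler</body></html>"
HUMAN_PAGE = "<html><body>Readable page with links for visitors</body></html>"


def serve_page(client_ip: str) -> str:
    """Return crawler-targeted content if the request comes from a
    known spider address; otherwise return the page meant for humans."""
    if client_ip in KNOWN_SPIDER_IPS:
        return SPIDER_PAGE
    return HUMAN_PAGE
```

In practice, cloakers maintain long, frequently updated lists of spider addresses, since the whole scheme fails the moment a crawler arrives from an unlisted IP and sees the human-facing page (or a human arrives from a listed one and sees the crawler copy).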
The page in the example above used cloaking. When I visited it, the content wasn’t nonsensical. I got a simple, readable page with two links to product information about Thomas Kinkade on other sites. The person behind it no doubt earns affiliate fees from clicks off her page. No doubt, the page will soon be removed by Google, which has a specific ban on cloaking and takes action against such pages.
Next, how cloaking does not equal spam.