There’s something wrong with much of the language we use in the SEO industry. Take the ubiquitous industry misnomer itself: “search engine optimization.” Who in this industry has ever optimized a search engine, other than a search engine? And there’s “SEM.” Who has ever marketed a search engine, other than a search engine?
Certainly, we optimize Web pages to ensure they can be more easily crawled and indexed by search engines. And we buy advertising at search engines to promote our clients’ products and services. But our industry sector titles don’t seem to properly describe those exact practices.
Here’s another phrase that gets bandied about a lot: “reverse-engineer.” A lot of potential clients ask me if I can reverse-engineer the algorithm the same way their previous or existing firm does.
Generally speaking, you reverse-engineer something to recreate it in some form or fashion. It’s all about analyzing systems and components and their interrelationships to come up with something else that does the same thing.
Correct me if I’m wrong, but no one in this industry has reverse-engineered a search engine algorithm and recreated it. If someone had scientifically reverse-engineered Google’s algorithm, all 100 or more factors and elements, and now claims to know exactly what the “secret sauce” for guaranteed ranking is, we’d hear about it.
Certainly, many in the industry are obsessed with search engine crawler activity. But you only need a basic log-file analyzer to get a handle on that. It’s not difficult at all to figure out what a crawler does and recreate it. As I did a number of years ago, you can read Junghoo Cho’s thesis on the subject. In fact, go ahead and build yourself a crawler; it’s not too difficult if you’re a programmer. Just read this book.
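A basic log-file check really is that simple. Here’s a minimal Python sketch that pulls crawler requests out of an Apache-style “combined” access log; the log lines and bot names below are invented for illustration, not taken from any real server:

```python
import re

# Match the common Apache/NGINX "combined" log format.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

# Crawler user-agent substrings of the era (illustrative list).
BOT_NAMES = ("Googlebot", "Slurp", "msnbot")

def crawler_hits(log_lines):
    """Yield (bot, path, status) for every request made by a known crawler."""
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue
        agent = m.group("agent")
        for bot in BOT_NAMES:
            if bot in agent:
                yield bot, m.group("path"), m.group("status")

# Two made-up log lines: one crawler visit, one human visit.
sample = [
    '66.249.66.1 - - [12/Dec/2005:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [12/Dec/2005:10:00:01 +0000] "GET /about.html HTTP/1.1" 200 1024 '
    '"-" "Mozilla/4.0 (compatible; MSIE 6.0)"',
]

for bot, path, status in crawler_hits(sample):
    print(bot, path, status)  # only the Googlebot request is reported
```

That tells you what the crawler fetched and when, which is useful, but it’s observation, not reverse-engineering.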
However, this is hardly reverse-engineering an algorithm. It’s simply identifying an active component in the data collection and indexing process.
When it comes to the ranking element of the algorithm, there are many factors that can’t be identified because we simply don’t have access to the pertinent information. The number and quality of links pointing to your site are among the most important factors in the ranking algorithm. To do a back-link check, go to the search box at Google and type in “link:www.yourdomainname.com.” You’ll get a list of links that point to your site. You can do the same to find out who links to your competitor’s site.
To a search engine, this is known as “in-degree.” Another important signal builds on in-degree: “co-citation,” as it’s known in social network analysis. The algorithm detects pairs of sites that don’t link to each other but are consistently linked, close together, from many other sites. Check out these graphics to better understand the concept.
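To make the idea concrete, here’s a toy Python sketch of co-citation counting over an invented link graph (the domain names and graph are illustrative, not real data). Note that site-x and site-y never link to each other, yet they earn a co-citation score because other pages cite them together:

```python
from collections import Counter
from itertools import combinations

# Invented link graph: each citing page maps to the sites it links out to.
outlinks = {
    "blog-a.example": {"site-x.example", "site-y.example"},
    "blog-b.example": {"site-x.example", "site-y.example", "site-z.example"},
    "blog-c.example": {"site-x.example", "site-z.example"},
}

def co_citation_counts(graph):
    """For each pair of linked-to sites, count how many pages cite both."""
    counts = Counter()
    for targets in graph.values():
        # Sort so each pair is counted under one canonical ordering.
        for pair in combinations(sorted(targets), 2):
            counts[pair] += 1
    return counts

counts = co_citation_counts(outlinks)
print(counts[("site-x.example", "site-y.example")])  # 2 pages cite both
```

A real engine would run something like this over billions of crawled pages, which is precisely the data none of us has.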
The point is, you can’t get access to this information unless you have access to Google’s entire database.
At the “Future of SEM” session at Search Engine Strategies in Chicago last week, expert search optimizer Greg Boser suggested he could see user behavior data becoming a much more important factor in the ranking algorithm. I couldn’t agree more. In fact, I wrote a column that touched on the subject earlier this year.
And how do we get access to that data to factor into the so-called SEO reverse-engineering process? Only search engines have this data.
We may be able to discover components and clues as to what may be happening to provide a little anecdotal evidence in the odd SEO forum. But this is hardly scientifically stripping an algorithm to the core to recreate it.
More SEO Jargon
What about the “sandbox”? Whose idea was that? It’s another nonissue that’s reached almost hysteria level. And yet, to those businesses that launch sites based on well-thought-through business models, it doesn’t exist. The sandbox only seems to affect sites that are of no interest to humans or search engines.
You don’t get “sandboxed” for that. You get what you deserve — nothing.
“Proprietary tools.” That’s another phrase that crops up all the time. Every SEO firm has its own proprietary tools. The fact that most SEO shops have the same tools that do the same things rarely seems to be mentioned.
“We have a tool that reverse-engineers the algorithm by downloading thousands of Web pages and analyzing the text on them,” a firm may say. This means it has a keyword-density analyzer. Pretty useless, but you can get one for yourself for $99.
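For what it’s worth, a keyword-density analyzer isn’t a secret weapon; it’s a few lines of code. A minimal sketch in Python, run against an invented sample page: strip the markup, count the words, divide.

```python
import re

def keyword_density(html, keyword):
    """Fraction of words on a page that match the given keyword."""
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    words = re.findall(r"[a-z']+", text)
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# Made-up page for illustration.
page = ("<html><body><h1>Cheap widgets</h1>"
        "<p>Buy cheap widgets. Our widgets are cheap.</p></body></html>")

print(round(keyword_density(page, "widgets"), 2))  # 3 of 9 words: 0.33
```

Knowing that a third of a page’s words are “widgets” tells you nothing about why it ranks, which is rather the point.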
What about “rank checking”? You can get a tool to do that for $389. Let’s throw in a back-link and anchor-text analyzer. You can pick one up for just $167. And so it goes. For about a grand, you can pretty much have an SEO toolbox.
It’s no wonder I get so much feedback from people telling me how hard it is to cut through the SEO hype pitched to them. Wouldn’t it be nice if SEO clients could hear what they really want to hear, such as, “I’ve analyzed your business model and have a depth of knowledge about you and your marketplace”?
Optimize, sandbox, reverse-engineer, and all the other industry jargon suggest there’s a technical problem that needs to be solved. But more often, it’s a business or marketing problem that needs to be addressed.
That’s my final rant for this year. I sincerely hope you have a happy and peaceful time over the holiday.