I’m back at my desk in Raleigh, NC, this week. I always look forward to coming to my second home. It gives me an opportunity to catch up with the many talented people I have the pleasure of working with at my company.
As I spend a great deal of time at conferences and events, I aggregate a lot of industry data. Meeting people involved in the same field, whether direct competitors, tool vendors, or industry pundits, provides very interesting and varied perspectives on the SEO side of the business.
I also have some great exchanges with colleagues at the "doing" end of the job. One thing that strikes a chord with most people I talk to is that obtaining organic (or natural) listings isn't getting any easier, and much of the "textbook" process of SEO doesn't seem to cut it anymore.
I’m keen to eliminate time-wasting on anything in the SEO process that doesn’t provide a direct, noticeable contribution to better ranking. So today, I want to reach out to you and get your opinion on what works for you — and what doesn’t.
I keep banging the same old drum about the basic process of getting Web pages crawled and indexed by major search engines, compared to the more complex task of getting a decent rank. But really, if meta tags, H1 tags, alt text attributes, and other components of textbook SEO are so minuscule in the greater scheme of ranking, why do we bother?
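For readers newer to the field, the "textbook" on-page elements in question look something like the sketch below. This is a generic illustration, not any particular site's markup, and the page title, file names, and keyword choices are invented for the example:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Page title: the one on-page element most agree still matters -->
  <title>Foreign Currency Converter | Example Exchange</title>
  <!-- The classic meta tags textbook SEO says every page needs -->
  <meta name="description" content="Convert between world currencies at live rates.">
  <meta name="keywords" content="currency converter, foreign currency exchange">
</head>
<body>
  <!-- H1 tag carrying the target keyword phrase -->
  <h1>Foreign Currency Converter</h1>
  <!-- Alt text attribute describing the image for crawlers -->
  <img src="rate-chart.png" alt="Exchange rate chart">
</body>
</html>
```

The question posed above is whether adding elements like these ever moves a page meaningfully in competitive results, or whether they are simply hygiene.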
Is there really a case where someone was at 856 in the SERPs, then added a meta tag that rocketed the site to number 1? Is there a case where someone was at number 10, added an H1 tag, and slipped nicely into the top 5 results?
If there are such cases in commercially competitive sectors, I’d really love to hear about any examples. As far as my own experience goes, links and end-user behavior are the important, upwardly mobile components for decent ranking. So that’s where I’d rather focus my own time and attention.
Very frequently, I see search engine results that not only lack meta tags and the other textbook paraphernalia mentioned above but often don't even contain the keywords used in the query!
In fact, I often see top-ranking results from pages that haven’t even been crawled. Search engines are very much aware of the linkage data surrounding hundreds of thousands of pages on the frontier of the crawl and can often rank them even before the crawler gets to them.
Ranking aside, the technical process we sometimes go through to create crawler-friendly pages is itself an imposition and a waste of our time and effort. If search engines want all the juicy, free content we have for their end users, why should we also have to do their job for them?
Here’s an interesting question to ponder: if we continue to fix barriers to crawling on behalf of search engines, how will they ever know such problems exist?
If crawlers continuously ran into problems with crawling and indexing, search engines would have to find a way of fixing them to provide relevant results to end users. We wouldn’t have to worry about what are frequently technical minutiae.
Fortunately, crawlers aren’t the primitive pieces of technology they once were. They’re getting smarter and dealing far more easily with the different technologies used to develop Web pages. The more intelligent these learning machines become, the less search engines will need the technical segment of our industry to act as their unpaid workforce of page tweakers.
Content is also often misconstrued in SEO as simply large quantities of keyword-scattered pages. Yet content can be just as much a tool or concept as it can be hundreds or thousands of text-heavy pages.
Content is certainly critical to achieving the much-needed linkage data that surrounds pages for ranking purposes. But content can come in many shapes and forms. Search a major search engine for "foreign currency exchange," "currency converter," and other foreign-currency-related queries. You'll always see XE.com in the top results. It's one page with a banner ad, a tool, and a little instructional text. It has ranked for many popular searches, and has done so for years, with no changes or tweaks required. It's only a single page. Is that content? I think so!
The page has excellent linkage and, perhaps more important, huge end-user traffic via search. I firmly believe this combination is the reason behind its continued presence and visibility.
Links are good. But you can only get links from other people who have Web sites. What about the millions of end users who don’t have sites? The only way they can show a search engine their approval of results’ relevancy is by voting with clicks.
I now tend to look at links as a peer group review. If your community thinks your site is the greatest piece of work ever and decides to link to it, you have their vote. End users will decide whether the community was right or wrong.
Should we waste our time on textbook SEO techniques as a “just for good measure” effort? Or should we spend more time using creative thinking and promotional efforts to succeed on behalf of our clients? Let me know what you think.
Join us for Search Engine Strategies in Toronto, April 25-26, 2006.
Want more search information? ClickZ SEM Archives contain all our search columns, organized by topic.