Search engine innovator Brian Pinkerton’s WebCrawler was the Web’s first full-text retrieval search engine. Prior to this development, search engine crawlers simply scanned the first couple of hundred words (often less) from the top of your page and moved on. (WebCrawler is now a metasearch engine.)
At the time, WebCrawler was starved of fresh content. Pinkerton was programming his crawler to pillage Usenet (now Google Groups) at a whopping 50 gigs a day. The purpose, of course, was to suck out all the links in the Usenet feed for future crawling. But little or nothing was dynamically delivered then, so crawling for new content while maintaining the freshness of existing index content was pretty straightforward.
Today, that task is much trickier. The Web continues to grow exponentially every day, and, although there’s tons of new content in good old flat HTML, there’s ten times more tucked away in millions and millions of online databases that dynamically generate pages. And accessing, indexing, and maintaining all that information is most certainly not a trivial task.
As Web site development technology continues to advance at a rapid pace, so must search engine crawlers. The question mark in a URL used to be a dreaded token for search engine crawlers. With early crawlers, a major part of the task was to be polite and not put too much pressure on visited Web sites, and to avoid serious technical issues, such as getting caught in a recursive loop, which could happen when crawling past the question mark in a URL. Once inside a huge database, the crawler could potentially bring a Web site to a standstill, or crash itself. And that could have legal ramifications.
Certainly search engine crawlers do a much better job now of crawling dynamically generated content as well as Flash and other non-HTML file types. But the matter of index freshness is still a challenge.
And that still puts pressure on site owners’ bandwidth, as the only way a crawler can know if a page has changed and update it in the index is by downloading it again. And again, and again, and again.
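HTTP’s conditional request mechanism illustrates the trade-off: the crawler still has to contact the server for every page, but an unchanged page costs only a tiny 304 Not Modified response rather than a full download. Here’s a minimal sketch in Python; the URL and stored header value are placeholders, not part of any real crawler:

```python
import urllib.request

def conditional_headers(stored_last_modified: str) -> dict:
    """Echo the Last-Modified value saved at the previous crawl back
    to the server as If-Modified-Since, so it can skip the body."""
    return {"If-Modified-Since": stored_last_modified}

def needs_reindex(status: int) -> bool:
    """200 = the page changed and the full body came back, so the
    index copy is stale. 304 = Not Modified, no body transferred."""
    return status == 200

# Hypothetical revalidation of one page (network call, shown for shape only):
# req = urllib.request.Request(
#     "http://www.example.com/page.html",
#     headers=conditional_headers("Sat, 18 Nov 2006 00:00:00 GMT"))
```

Even with this in place, the crawler must still issue one request per page per revisit, which is exactly the bandwidth pressure described above.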
How much easier would it be if a search engine crawler only visited and downloaded your pages when it absolutely knew the page had changed? The entire crawling process would be streamlined and sped up.
Back then, Pinkerton and I kicked around the idea of an XML schema to sit on Web servers, maintained by the site owners (in the main, ISPs). The idea was simple. The ISP would update the XML file with site changes based on server analytics. The crawler would download the XML file first, then retrieve only the pages that had changed since the last crawl. Unfortunately, that practice was wide open to spamming and other forms of manipulation, so the idea was canned pretty quickly.
Then last year, Google announced its sitemap initiative. It’s an excellent step forward in crawler development. Not only that, it provides a much-needed way to submit your site to the engine, as opposed to waiting until you build up enough linkage data to get crawled and revisited on a regular basis.
And all that’s why I welcomed the new sitemap protocol. At least now there’s a way to create a single feed that you can submit to the three major search engines (Ask isn’t included at this time) to make them aware of your Web pages. In fact, it provides them with a licence to crawl.
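For reference, the feed the three engines accept is plain XML in the sitemaps.org 0.9 format. The sketch below generates one with Python’s standard library; the example.com URL and the date are placeholders:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(pages):
    """Build a sitemaps.org 0.9 sitemap.
    pages: iterable of (url, lastmod) pairs.
    Returns the sitemap XML as a string."""
    ET.register_namespace("", NS)  # emit the protocol namespace as the default
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc, lastmod in pages:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
        ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap([("http://www.example.com/", "2006-11-18")])
```

Note the lastmod field: it’s exactly the change signal discussed above, letting a crawler skip pages that haven’t been touched since its last visit.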
Of course, as with the launch of Google Sitemaps last year, there’s a strong emphasis on the fact that knowing your Web pages exist in no way guarantees they’ll get crawled by all the big three. However, avoiding those early technical barriers that prevented many Web sites from being crawled, even if they had linkage data through the roof, is big.
One day all Web sites will be submitted this way. Though at some point, we’ll probably have to pay for it!
Have a burning desire to know everything about crawling the Web? I strongly recommend this thesis. It’s written by Junghoo Cho, a researcher whose work I’ve studied a great deal. And although it was written in 2001, it still stands up today.
Meet Mike at Search Engine Strategies in Chicago, December 4-7, at the Hilton Chicago.