Google's universal search is proving that methods beyond the crawl are required to retrieve relevant information from the Web's emerging new structure. Will the crawler survive?
As I'm English, Independence Day celebrations don't rank too high on my social calendar. However, when I was invited to join a friend and some marketing big-brains for the holiday weekend at the beach, how could I refuse?
The group was a mix of people from conventional and interactive agencies, such as Grey, Leo Burnett, and Beyond Interaction, specifically from the search side. Naturally, most of the conversation (over many cold beers) was marketing related.
It was so refreshing to note how my new friends were fascinated by search and its complexities. But one thing stood out more than anything else during the conversations: the recurring assumption that Google has access to all the content on the entire Web.
It's been a long time since I last wrote about the discoverability of content on the Web, and it's always worth revisiting. To many end users, Google is the Web. Yet, mighty as Google is, it can only return results from the fraction of the Web it has managed to crawl. Of course, there are other methods through which Google can discover content, such as user-submitted content via YouTube, Google Base, Google Maps, Google Picasa, and so forth.
But when it comes to the SEO favorite, the search engine crawler, there are strong freshness requirements operating on multiple timescales. Trying to assess the continued relevance of existing pages in the index while dealing with the high arrival rate of new Web content isn't an easy task.
The overhead (the average number of fetches required to discover one new page) needs to be kept to a minimum. Plus, bandwidth is still an issue: it wouldn't be practical to attempt to download the entire Web every day. Politeness rules still apply when crawling the Web, and some sites are so large that they simply can't be crawled from beginning to end in the space of a week.
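To make that overhead figure concrete, here's a toy sketch (my own illustration, not any search engine's actual code) of how you might measure fetches per newly discovered page over a crawl log:

```python
# Toy illustration (not any engine's real code): crawl "overhead" is the
# average number of fetches needed to discover one previously unknown page.

def crawl_overhead(fetch_log, known_pages):
    """fetch_log: list of (url, outlinks) pairs observed during a crawl.
    known_pages: set of URLs already in the index."""
    fetches = 0
    new_pages = set()
    for url, outlinks in fetch_log:
        fetches += 1
        for link in outlinks:
            if link not in known_pages and link not in new_pages:
                new_pages.add(link)
    return fetches / len(new_pages) if new_pages else float("inf")

# Example: three fetches that together surface only one unknown URL (d.com),
# so it cost three fetches to discover one new page.
log = [
    ("a.com", ["b.com", "c.com"]),
    ("b.com", ["a.com"]),
    ("c.com", ["d.com"]),
]
print(crawl_overhead(log, known_pages={"a.com", "b.com", "c.com"}))  # 3.0
```

The closer a crawl's link graph loops back on pages already indexed, the higher this number climbs, which is exactly why wasted fetches matter at Web scale.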
No crawler is ever likely to crawl the entire reachable Web. Effectively infinite Web sites, spider traps, spam, and many other obstacles prevent it.
There will always be a tradeoff between recrawling existing pages and crawling a new page. In a connected world where breaking news is of global concern, search engines must be able to provide that information almost in real time to avoid end-user dissonance.
At the same time, consider the user looking for seemingly less urgent information, such as an operating manual. The user knows it must exist on the Web, yet can't find it through a search engine. That, too, is a disappointing experience.
New pages are primarily discovered when a Web site uploads them and links to them from existing indexed pages, or when an entirely new Web site is created and linked to from an existing indexed site.
Of course, this is also where the "filthy linking rich" dilemma that I've written about comes into play. Web sites with more links attract more links than those with fewer. As a result, they have more content indexed, more links, and perhaps preference when it comes to ranking.
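The "filthy linking rich" dynamic is essentially what network theorists call preferential attachment. A toy simulation (purely my own illustration, not a model of Google's index) shows how early link advantages compound:

```python
import random

# Toy preferential-attachment simulation (purely illustrative): each new
# page links to one existing page, chosen in proportion to how many links
# that page already has -- the "rich get richer" effect.
random.seed(42)
links = [1, 1]  # two seed pages, one inbound link each
for _ in range(1000):
    # pick a link target weighted by current link counts
    target = random.choices(range(len(links)), weights=links)[0]
    links[target] += 1
    links.append(1)  # the newly created page starts with a single link

# A handful of early pages end up with far more links than the typical page.
print(max(links), sorted(links)[len(links) // 2])
```

Run it a few times with different seeds: the best-connected page consistently dwarfs the median one, which mirrors why well-linked sites get crawled deeper and ranked higher.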
And then there's the temporal issue of stale pages. For instance, the most relevant documents for a query about who hit the most home runs in baseball history up until 2007 would have been about Hank Aaron. However, after 2007 the most relevant pages for exactly the same type of query would be about Barry Bonds.
Then there's the case of Google knowing about the existence of Web pages but not yet having crawled them. Google extracts billions of links from billions of pages, and there must be some order and priority as to which get crawled first.
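One simple way to picture that ordering problem is a priority queue over discovered-but-uncrawled URLs. The sketch below is my own assumption for illustration (using raw inbound-link counts as a crude importance proxy), not Google's actual scheduling policy:

```python
import heapq

# Sketch of a crawl frontier: discovered-but-uncrawled URLs, fetched in
# priority order. Scoring by inbound-link count is an illustrative
# assumption, not any real engine's policy.
class CrawlFrontier:
    def __init__(self):
        self._heap = []
        self._seen = set()

    def add(self, url, inlink_count):
        if url not in self._seen:  # never queue the same URL twice
            self._seen.add(url)
            # heapq is a min-heap, so negate to pop the most-linked URL first
            heapq.heappush(self._heap, (-inlink_count, url))

    def next_url(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

frontier = CrawlFrontier()
frontier.add("example.com/popular", 900)
frontier.add("example.com/obscure", 3)
print(frontier.next_url())  # example.com/popular is fetched first
```

With billions of candidate URLs, whatever signals feed that priority score decide which corners of the Web get seen quickly and which wait.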
Even though Google is far better now at dealing with dynamically delivered content and different file types, the invisible Web still exists. Millions of pages remain locked in databases or behind password-protected areas that crawlers can't reach.
Search engine crawlers are certainly much smarter now than in the early days of the Web. Yet link-graph-driven crawling, or even the crawling model as a whole, may never be able to provide timely discovery of Web content in the future.
Google's universal search is proving that methods beyond the crawl are required to retrieve relevant information from the Web's emerging new structure.
User-generated content analysis. Cross-content analysis. Community analysis. Aggregate analysis. All of these must be taken into account to provide the most relevant results and richest end user experience.
So, will the crawler survive?
Join me over at Search Engine Watch's forum to discuss the crawler's possible fate.
Meet Mike at SES San Jose, August 18-22 at San Jose Convention Center.
Mike Grehan is currently CMO and managing director at Acronym, where he is responsible for directing thought leadership programs and cross-platform marketing initiatives, as well as developing new, innovative content marketing campaigns.
Prior to joining Acronym, Grehan was global VP, Content, at Incisive Media, publisher of Search Engine Watch and ClickZ, and producer of the SES international conference series. Previously, he worked as a search marketing consultant with a number of international agencies handling global clients such as SAP and Motorola. Recognized as a leading search marketing expert, Grehan came online in 1995 and is the author of numerous books and white papers on the subject and is currently in the process of writing his new book “From Search To Social: Marketing To The Connected Consumer” to be published by Wiley later in 2014.
In March 2010 he was elected to SEMPO’s board of directors and after a year as VP he then served two years as president and is now the current chairman.