Are we back to discussing whether or not the earth is flat? There appears to be significant physical evidence that the web is much larger than search engines would like us to know. What would you say to a web of 500 billion searchable documents versus the reported 1 billion?
BrightPlanet released a white paper estimating that there are more than 100,000 content-rich searchable databases available on the web. The study suggests the existence of a hidden “deep web” with approximately 500 billion individual documents, most of which are available to the public.
This study also indicates that the deep web is a vast pool of Internet content that is 500 times larger than the known “surface” of the World Wide Web. The significance of this is that quality content exists in documents within searchable databases on the web, but conventional search engines can’t access it. Just think what this could mean to businesses, researchers, and consumers – to gain access to valuable, difficult-to-find information on the web with accuracy and ease.
BrightPlanet has developed LexiBot technology, claimed to be the first and only search technology capable of identifying, retrieving, categorizing, and organizing both “deep” and “surface” content from the World Wide Web. LexiBot has the ability to query multiple search sites directly and simultaneously, which allows deep web content to be retrieved.
The deep web differs qualitatively from the surface web in that its sources store content in searchable databases that produce results dynamically in response to a direct request. But direct queries are an arduous way to search because they are handled one at a time. LexiBot automates the process of handling multiple direct queries simultaneously by means of its multiple-thread technology. Traditional search engines create their databases by spidering or crawling “surface” web pages. To be indexed, a page must be static and linked to other pages. Traditional search engines cannot see or retrieve content in the deep web because their technology can’t probe beneath the surface. So while the deep web has always been present, it’s been inaccessible up to now.
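The multi-threaded direct-query approach described above can be illustrated with a small sketch. This is not BrightPlanet's LexiBot code; the source names and the stub query functions below are hypothetical stand-ins for real searchable-database endpoints, and it assumes only that each source accepts a query and returns a list of result records.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical deep-web sources: each maps a query string to result records.
# Real sources would be HTTP search forms; these stubs stand in for them.
SOURCES = {
    "patents_db":  lambda q: [f"patents:{q}:1", f"patents:{q}:2"],
    "medline_db":  lambda q: [f"medline:{q}:1"],
    "sec_filings": lambda q: [f"sec:{q}:1", f"sec:{q}:2", f"sec:{q}:3"],
}

def direct_query(source_name, query):
    """Issue one direct query against a single searchable database."""
    return source_name, SOURCES[source_name](query)

def federated_search(query, sources=SOURCES):
    """Query every source simultaneously, one thread per source,
    instead of submitting direct queries one at a time."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(direct_query, name, query) for name in sources]
        return {name: hits for name, hits in (f.result() for f in futures)}

results = federated_search("metadata")
print(sum(len(hits) for hits in results.values()))  # 6 records from 3 sources
```

The point of the threading is latency, not throughput: each database answers independently, so querying all of them at once takes roughly as long as the slowest single source rather than the sum of all of them.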
Since we are well into the Information Age when usable, relevant data is highly prized, the value of deep web content is incalculable. That’s why researchers of web infrastructure have proposed new search engine models and believe we need a fundamental restructuring of the way search engines work.
The BrightPlanet study found that public information on the deep web is 400 to 550 times larger than the commonly defined World Wide Web. The deep web contains approximately 7,500 terabytes of information, versus 19 terabytes in the surface web.
Not only that, it is estimated that more than 100,000 nonindexed deep web sites currently exist. Sixty of these collectively contain about 750 terabytes of information, exceeding the size of the surface web 40 times over.
On average, deep web sites receive about 50 percent more monthly traffic than surface sites and are more heavily linked to, yet the typical deep web site is not well known to the searching public. The deep web is also believed to be the fastest-growing source of new information on the Internet.
These sites in the deep web contain quality content – content that is highly relevant to every information need. More than half of the deep web content resides in topic-specific databases.
Ninety-five percent of the deep web contains publicly accessible information that is not subject to fees or subscriptions.
So what does all this mean? It would appear that simultaneous searching of both the surface and deep web is necessary when comprehensive information retrieval is required. And the structure of our currently popular search engines might be in for evolutionary change.
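Such a combined search would need a layer that merges hits from a crawled surface index with hits returned by direct deep-web queries. A minimal sketch of that merge step follows; the data shapes and field names are assumptions for illustration, not part of any actual engine:

```python
def merge_results(surface_hits, deep_hits):
    """Combine surface-index hits with deep direct-query hits,
    keeping the first occurrence of each URL (surface ranking wins ties)."""
    seen, merged = set(), []
    for hit in surface_hits + deep_hits:
        if hit["url"] not in seen:
            seen.add(hit["url"])
            merged.append(hit)
    return merged

surface = [{"url": "http://example.com/a", "source": "crawler"}]
deep = [{"url": "http://example.com/a", "source": "db"},
        {"url": "http://example.com/b", "source": "db"}]
combined = merge_results(surface, deep)
print(len(combined))  # 2 unique documents
```

Deduplicating by URL is the simplest policy; a production system would also have to reconcile relevance scores computed by very different ranking schemes, which is one reason the article suggests search engine architecture itself would have to change.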
BrightPlanet has automated the identification of deep web sites and the retrieval process for simultaneous searches. It has also developed a direct-access query engine covering approximately 22,000 sites, projected to grow to 100,000. A list of these sites is available in the CompletePlanet search portal.