Deep web

The deep web (or invisible web or hidden web) is the name given to pages on the World Wide Web that are not part of the surface web indexed by common search engines. It consists of pages that are not linked to by other pages (e.g., dynamic pages that are returned only in response to a submitted query). The deep web also includes sites that require registration or otherwise limit access to their pages (e.g., using the Robots Exclusion Standard), preventing search engines from browsing them and creating cached copies. Pages that are only accessible through links produced by JavaScript and Flash also often reside in the deep web, since most search engines are unable to properly follow such links.

Non-textual files such as multimedia (image) files, Usenet archives, and documents in non-HTML file formats such as PDF and DOC used to form part of the deep web, but most search engines now index many of these resources.

It is estimated that the deep web is several orders of magnitude larger than the surface web (Bergman, 2001).

The deep web should not be confused with the terms dark web or dark internet, which refer to machines or network segments not connected to the Internet. While deep web content is accessible to people online but not visible to conventional search engines, dark internet content is not accessible online by either people or search engines.

Surface web

To better understand the deep web, consider how conventional search engines construct their databases. Programs called spiders or web crawlers start by reading pages from an initial list of websites. Each page they read is indexed and added to the search engine's database. Any hyperlinks to new pages are added to the list of pages to be indexed. Eventually, all reachable pages have been indexed or the search engine runs out of time or disk space. These reachable pages are the surface web. Pages which do not have a chain of links from a page in the spider's initial list are invisible to that spider and not part of the surface web it defines.
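
As a rough illustration of the process described above, the sketch below implements a minimal breadth-first crawler in Python. The seed URLs, page limit, and link-extraction regex are simplifications introduced here for illustration; real crawlers also honor robots.txt, throttle their requests, and parse HTML properly.

    from collections import deque
    from urllib.parse import urljoin
    from urllib.request import urlopen
    import re

    def crawl(seed_urls, max_pages=100):
        """Breadth-first crawl: index every page reachable by hyperlinks from the seeds."""
        queue = deque(seed_urls)
        seen = set(seed_urls)
        index = {}  # url -> page text; this is the "surface web" that this crawler defines
        while queue and len(index) < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
            except Exception:
                continue  # unreachable pages are simply skipped
            index[url] = html  # "indexing" is reduced here to storing the raw text
            # Any hyperlink found on the page is added to the list of pages to visit.
            for href in re.findall(r'href="([^"]+)"', html):
                link = urljoin(url, href)
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return index

    # Example (hypothetical seed list):
    # surface = crawl(["https://example.com/"])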

In contrast to the 'surface web' stands the 'deep web'. The great majority of the deep web is composed of searchable databases. To understand why these databases are invisible to spiders (and their search engines), consider the following:

Imagine someone has collected a great amount of information – books, texts, articles, images, etc. – and put it all online in a website, creating a database reachable only via a search field. Like most databases, it would work like this (a toy sketch in code follows the list):
  1. the user types the keywords he or she wants into a search field
  2. the search facility looks inside the database and retrieves the relevant content
  3. a results page is presented, listing links to every important topic related to the user’s query
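
The toy sketch below (all records, page names, and functions are invented for illustration) makes the three steps concrete: the content lives in a database and is returned only in response to a submitted query, while the crawler-visible site consists of a handful of static pages.

    # Hypothetical contents of the site's database: records a crawler never sees directly.
    DATABASE = {
        "doc1": "An article about medieval manuscripts",
        "doc2": "A scanned book on marine biology",
        "doc3": "An image collection of nineteenth-century maps",
    }

    # Static pages the crawler *can* reach by following hyperlinks.
    STATIC_PAGES = {"/": "home", "/about": "about us", "/contact": "contact us"}

    def search(keywords):
        """Step 2: look inside the database and retrieve the relevant content."""
        hits = [doc_id for doc_id, text in DATABASE.items()
                if all(k.lower() in text.lower() for k in keywords)]
        # Step 3: build a results page as a list of links, one per matching record.
        return ["/view?id=" + doc_id for doc_id in hits]

    # Step 1: the user types keywords into the search field.
    print(search(["marine", "biology"]))  # -> ['/view?id=doc2']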

Once a conventional search engine’s web crawler reaches this site, it will capture the text contained in the main page and in the pages it can reach through hyperlinks (usually “about us”, “contact us”, “privacy policy”, etc.). But the great majority of the information – the books, texts, articles or images that are only reachable by querying the search field – cannot be reached by the web crawler. The robot cannot predict which words it should type into the search field, so the data remains invisible to the search engine.

Accessing

As noted above, search engines use web crawlers that follow hyperlinks. Such crawlers typically do not submit queries to databases, because of the potentially infinite number of queries that could be made against a single database. It has been noted that this can be (partially) overcome by providing ordinary hyperlinks to query results, which also increases Google-style PageRank for those members of the deep web.
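
A minimal, hypothetical sketch of that idea: a site owner generates a static browse page whose hyperlinks point at canned query results, so that an ordinary link-following crawler can discover them. The search path, parameter name, and query list below are assumptions made for the example.

    from urllib.parse import urlencode

    # Hypothetical queries a site owner wants crawlers to be able to discover.
    POPULAR_QUERIES = ["solar panels", "wind turbines", "geothermal heating"]

    def browse_page(search_path="/search"):
        """Emit a plain HTML page whose hyperlinks lead to query-result pages."""
        links = ['<li><a href="{}?{}">{}</a></li>'.format(search_path, urlencode({"q": q}), q)
                 for q in POPULAR_QUERIES]
        return "<html><body><ul>\n" + "\n".join(links) + "\n</ul></body></html>"

    print(browse_page())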

In 2005, Yahoo! made a small part of the deep web searchable by releasing Yahoo! Subscriptions. This search engine searches through a few subscription-only web sites.

Some search tools, such as Poogee, are being designed to retrieve information from the deep web. Their crawlers are set to identify searchable databases and interact with them, aiming to provide access to deep web content.

Crawling the deep web

Researchers have been exploring how the deep web can be crawled automatically. Raghavan and Garcia-Molina (2001) presented an architectural model for a hidden-web crawler that used key terms, provided by users or collected from the query interfaces, to fill in a web form and crawl the deep web resources behind it. Ntoulas et al. (2005) created a hidden-web crawler that automatically generated meaningful queries to issue against search forms. Their crawler produced promising results, but the problem is far from solved.
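
The sketch below gives a heavily simplified version of the query-generation idea, not the cited systems themselves: candidate keywords are submitted to a search form, the result links are harvested, and the returned text is mined for new candidate keywords for later rounds. The form URL, field name, and seed keywords are invented for illustration.

    import re
    from collections import Counter
    from urllib.parse import urlencode, urljoin
    from urllib.request import urlopen

    def crawl_form(form_url, field="q", seeds=("data",), rounds=3, per_round=5):
        """Issue queries against a search form and harvest result links (toy version)."""
        candidates = Counter({seed: 1 for seed in seeds})
        found_links = set()
        for _ in range(rounds):
            # Query the form with the currently most promising keywords.
            for keyword, _count in candidates.most_common(per_round):
                url = form_url + "?" + urlencode({field: keyword})
                try:
                    html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
                except Exception:
                    continue
                found_links.update(urljoin(url, h) for h in re.findall(r'href="([^"]+)"', html))
                # Mine the result text for new candidate keywords for later rounds.
                for word in re.findall(r"[a-zA-Z]{4,}", html):
                    candidates[word.lower()] += 1
        return found_links

    # Example (hypothetical endpoint and seeds):
    # links = crawl_form("https://example.com/search", field="q", seeds=("library", "archive"))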

Since a large amount of useful data and information resides in the deep web, search engines have begun exploring alternative methods to crawl the deep web. Google’s Sitemap Protocol and mod_oai are mechanisms that allow search engines and other interested parties to discover deep-web resources on particular web servers. Both mechanisms allow web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not directly linked to the surface web.
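
For example, a site could advertise its otherwise unlinked, query-generated URLs in a Sitemap file. The sketch below writes a minimal sitemap.xml for a hypothetical list of such URLs; the URL set and file path are assumptions, but the XML layout follows the public Sitemap protocol.

    from xml.sax.saxutils import escape

    # Hypothetical deep web URLs, e.g. pages normally reachable only through a search form.
    DEEP_URLS = [
        "https://example.com/view?id=doc1",
        "https://example.com/view?id=doc2",
        "https://example.com/view?id=doc3",
    ]

    def write_sitemap(urls, path="sitemap.xml"):
        """Write a minimal Sitemap so crawlers can discover otherwise unlinked resources."""
        entries = "\n".join("  <url><loc>{}</loc></url>".format(escape(u)) for u in urls)
        xml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
               + entries + "\n</urlset>\n")
        with open(path, "w", encoding="utf-8") as f:
            f.write(xml)

    write_sitemap(DEEP_URLS)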

Another way to access the deep web is to crawl it by subject category or vertical. Since traditional engines have difficulty crawling and indexing deep web pages and their content, deep web search engines like Alacra, CloserLookSearch, and NorthernLight create specialty engines by topic to search the deep web. Because these engines are narrow in their data focus, they are built to access specified deep web content by topic. These engines can search dynamic or password protected databases that are otherwise closed to search engines.

Classifying resources

It is difficult to automatically determine whether a web resource is a member of the surface web or the deep web. If a resource is indexed by a search engine, it is not necessarily a member of the surface web, since the resource could have been found via Google’s Sitemap Protocol, mod_oai, OAIster, etc. If a search engine provides a backlink for a resource, we may assume that the resource is in the surface web. Unfortunately, search engines do not always provide all backlinks to resources. Even if a backlink does exist, there is no way to determine whether the resource providing the link is itself in the surface web without crawling all of the Web. Furthermore, a resource may reside in the surface web but not yet have been found by a search engine. Therefore, given an arbitrary resource, we cannot know for sure whether it resides in the surface web or the deep web without a complete crawl of the Web.

The concept of classifying search results by topic was pioneered by Yahoo! Directory search and is gaining importance as search becomes more relevant in day-to-day decisions. However, most of the work here has been in categorizing the surface Web by topic; little pioneering work has been done on the invisible (deep) Web in this area. Searching the deep web therefore poses a classification challenge in which two levels of categorization are required. The first level is to categorize sites into vertical topics (health, travel, automobiles, etc.) and sub-topics according to the nature of the content underlying their databases. Several deep web directories are under development, such as OAIster by the University of Michigan and DirectSearch by Gary Price, to name a few.

The second, more difficult, challenge is to categorize and map the information extracted from multiple deep web sources according to end-user needs. Deep web search reports cannot display URLs the way traditional search reports do. End users expect their search tools not only to find what they are looking for quickly, but also to be intuitive and user-friendly. To be meaningful, the search reports have to convey something about the nature of the content underlying the sources; otherwise the end-user will be lost in a sea of URLs that do not indicate what content lies underneath them. The format in which search results are presented varies widely with the particular topic of the search and the type of content being exposed. The challenge is to find and map similar data elements from multiple disparate sources so that search results may be exposed in a unified format on the search report, irrespective of their source.
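
As a sketch of that last point, the example below maps records from two hypothetical deep web sources onto one unified result schema so they can appear side by side in a single search report. All source records, field names, and mappings are invented for illustration.

    # Hypothetical records returned by two different deep web sources for a travel search.
    SOURCE_A = [{"hotel": "Seaside Inn", "city": "Lisbon", "price_eur": 120}]
    SOURCE_B = [{"name": "Harbour Hotel", "location": "Lisbon, PT", "nightly_rate": "135 EUR"}]

    # Per-source mappings from native field names onto a unified result schema.
    FIELD_MAPS = {
        "source_a": {"hotel": "title", "city": "place", "price_eur": "price"},
        "source_b": {"name": "title", "location": "place", "nightly_rate": "price"},
    }

    def unify(records, source):
        """Rename each record's fields according to that source's mapping."""
        mapping = FIELD_MAPS[source]
        return [{mapping.get(key, key): value for key, value in record.items()}
                for record in records]

    # Results from both sources can now be shown in a single, uniform search report.
    unified = unify(SOURCE_A, "source_a") + unify(SOURCE_B, "source_b")
    for row in unified:
        print(row["title"], "-", row["place"], "-", row["price"])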
