Crawling the Infinite Web: Five Levels Are Enough

Ricardo Baeza-Yates, Carlos Castillo

Research output: Contribution to journal › Article

25 Citations (Scopus)

Abstract

A large number of publicly available Web pages are generated dynamically upon request and contain links to other dynamically generated pages. This usually produces Web sites that can create arbitrarily many pages. In this article, several probabilistic models for browsing "infinite" Web sites are proposed and studied. We use these models to estimate how deep a crawler must go to download a significant portion of the Web site content that is actually visited. The proposed models are validated against real data on page views in several Web sites, showing that, in both theory and practice, a crawler needs to download only a few levels, no more than 3 to 5 "clicks" away from the start page, to reach 90% of the pages that users actually visit.
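The practical consequence described in the abstract, downloading only pages within a few links of the start page, can be illustrated with a depth-limited breadth-first crawler. The sketch below is not the authors' implementation and does not reproduce their probabilistic models; it is a minimal, self-contained Python example using only the standard library, with a hypothetical start URL and a depth cutoff of five levels as assumptions.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_depth=5):
    """Breadth-first crawl that stops max_depth 'clicks' from start_url."""
    seen = {start_url}
    queue = deque([(start_url, 0)])
    pages = []

    while queue:
        url, depth = queue.popleft()
        try:
            with urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable or malformed pages
        pages.append((url, depth))

        if depth == max_depth:
            continue  # do not follow links beyond the depth limit

        parser = LinkExtractor()
        parser.feed(body)
        for href in parser.links:
            link, _ = urldefrag(urljoin(url, href))  # absolute URL, fragment removed
            if link.startswith("http") and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))

    return pages


if __name__ == "__main__":
    # Hypothetical start page; the paper's result suggests max_depth=5 already
    # covers about 90% of the pages users actually visit on typical sites.
    for url, depth in crawl("https://example.com/", max_depth=5):
        print(depth, url)

The depth counter attached to each queued URL is what enforces the "levels" bound discussed in the paper: a page is fetched if it is reachable within max_depth clicks, and its outgoing links are ignored once that bound is reached.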

Original language: English
Pages (from-to): 156-167
Number of pages: 12
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3243
Publication status: Published - 1 Dec 2004

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
