CrawlPart: Creating Crawl Partitions in Parallel Crawlers.
- Source :
- 2013 International Symposium on Computational & Business Intelligence; 2013, p137-142, 6p
- Publication Year :
- 2013
-
Abstract
- With the ever-proliferating size and scale of the WWW [1], efficient ways of exploring its content are of increasing importance. How can we efficiently retrieve information from it through crawling? In this "era of tera" and of multi-core processors, multi-threaded processes offer a natural solution. Better still, how can we improve crawling performance by using parallel crawlers that work independently? The paper is devoted to the fundamental advantages of, and challenges arising from, the design of parallel crawlers [4]. It focuses mainly on URL distribution among the various parallel crawling processes: how to distribute URLs from the URL frontier to the concurrently executing crawler threads is an orthogonal problem. The paper addresses this by designing a framework that partitions the URL frontier into several URL queues, ordering the URLs within each of the distributed sets. [ABSTRACT FROM PUBLISHER]
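- The abstract does not spell out CrawlPart's exact partitioning rule, but the general idea of splitting a URL frontier into independent per-crawler queues can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: it assigns URLs by hashing the hostname, so every URL of a given site lands in the same partition and the crawler processes never need to coordinate on a host.

```python
import hashlib
from urllib.parse import urlparse

def partition_urls(frontier, num_crawlers):
    """Split a URL frontier into independent per-crawler queues.

    Illustrative sketch (not the paper's scheme): hash each URL's
    hostname to pick a partition, keeping all URLs of one site in
    the same queue so crawlers can work without coordination.
    """
    queues = [[] for _ in range(num_crawlers)]
    for url in frontier:
        host = urlparse(url).netloc
        # Stable hash of the hostname -> partition index
        idx = int(hashlib.md5(host.encode("utf-8")).hexdigest(), 16) % num_crawlers
        queues[idx].append(url)
    return queues

frontier = [
    "http://example.com/a",
    "http://example.com/b",
    "http://example.org/x",
]
queues = partition_urls(frontier, 2)
```

Each resulting queue could then be sorted by any intra-partition ordering (e.g. by priority or depth) before being handed to its crawler thread, matching the abstract's notion of ordering URLs within each distributed set.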
Details
- Language :
- English
- ISBNs :
- 9780769550664
- Database :
- Complementary Index
- Journal :
- 2013 International Symposium on Computational & Business Intelligence
- Publication Type :
- Conference
- Accession number :
- 94520091
- Full Text :
- https://doi.org/10.1109/ISCBI.2013.36