An Approach to Deep Web Crawling by Sampling

Web Intelligence and Intelligent Agent Technology, 2008. WI-IAT '08. IEEE/WIC/ACM International Conference (2008)

Cited by 49 | Viewed 2
Abstract
Crawling the deep web is the process of collecting data from search interfaces by issuing queries. With the wide availability of programmable interfaces exposed as Web services, deep web crawling has found a wide variety of applications. One of the major challenges in crawling the deep web is selecting queries so that most of the data can be retrieved at a low cost. We propose a general method to address this. To minimize the duplicates retrieved, we reduce the problem of selecting an optimal set of queries from a sample of the data source to the well-known set-covering problem and adopt a classical algorithm to solve it. To verify that queries selected from a sample also produce good results on the entire data source, we carried out a set of experiments on large corpora including Wikipedia and Reuters. We show empirically that our sampling-based method is effective: 1) the queries selected from samples can harvest most of the data in the original database; 2) queries with a low overlapping rate in the samples also yield a low overlapping rate in the original database; and 3) neither the sample nor the pool of terms from which queries are selected needs to be very large.
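The core idea in the abstract is to treat each candidate query term as the set of sample documents it retrieves and then solve the resulting set-covering instance. The sketch below is a minimal illustration, not the authors' implementation: it assumes a bag-of-words sample and uses the classical greedy set-cover heuristic; the function name select_queries and the coverage_target parameter are hypothetical.

```python
def select_queries(sample_docs, candidate_terms, coverage_target=0.99):
    """Greedily pick query terms that cover the sampled documents.

    sample_docs      -- list of sets of terms (one set per sampled document)
    candidate_terms  -- pool of query terms to choose from
    coverage_target  -- stop once this fraction of the sample is covered
    """
    # For each candidate term, precompute which sample documents it retrieves.
    matches = {t: {i for i, doc in enumerate(sample_docs) if t in doc}
               for t in candidate_terms}
    uncovered = set(range(len(sample_docs)))
    allowed_misses = len(sample_docs) * (1.0 - coverage_target)
    selected = []
    while matches and len(uncovered) > allowed_misses:
        # Greedy set-cover step: take the term that retrieves the most
        # still-uncovered documents, i.e. the fewest duplicates.
        best = max(matches, key=lambda t: len(matches[t] & uncovered))
        gain = matches.pop(best) & uncovered
        if not gain:  # no remaining term retrieves anything new
            break
        selected.append(best)
        uncovered -= gain
    return selected


# Toy usage: three "documents" and five candidate terms.
sample = [{"deep", "web", "crawl"}, {"deep", "query"}, {"sample", "cover"}]
print(select_queries(sample, {"deep", "web", "query", "sample", "cover"}))
# e.g. ['deep', 'sample'] covers all three documents with no overlap
```

Queries picked this way maximize new documents per query, which is the sampled analogue of keeping the overlapping rate low on the full data source.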
Keywords
original database,wikipedia,entire data source,reuters,data source,sampling-based method,web services,deep web crawling,user interfaces,low overlapping rate,deep web,web sites,large corpus,web service,search interfaces,large variety,query processing,low cost,databases,computer science,set cover,data mining