A focused web crawler harvests a collection of web documents confined to a particular topical subspace. The central difficulty for focused crawlers is identifying the next most important and relevant link to follow. Most focused crawlers rely on probabilistic models to predict the relevance of documents. Web documents are well characterized by their hypertext, and that hypertext can be used to determine a document's relevance to the search domain: the semantics of a link characterizes the semantics of the document it refers to. In this article, a novel focused crawler named LSCrawler is proposed. LSCrawler retrieves documents by estimating their relevance from the keywords in a link and the text surrounding the link. Document relevance is computed by measuring the semantic similarity between the keywords in the link and the taxonomy hierarchy of the specific domain. The system exhibits better recall because it exploits the semantics of the keywords in the link.
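The sketch below illustrates the general idea described in the abstract: keywords taken from a link's anchor text and surrounding text are scored by semantic similarity against a domain taxonomy, and the score decides whether the link is worth following. The taxonomy terms, the WordNet-based path similarity, and the threshold are assumptions for illustration only; the paper's actual ontology and similarity measure may differ.

```python
# Minimal sketch of link-relevance scoring in the spirit of a semantic focused
# crawler. Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

# Hypothetical taxonomy terms for a "sports" domain (illustrative only).
DOMAIN_TAXONOMY = ["sport", "football", "tournament", "athlete"]


def keyword_similarity(keyword, domain_term):
    """Best WordNet path similarity over all synset pairs of the two terms."""
    best = 0.0
    for s1 in wn.synsets(keyword):
        for s2 in wn.synsets(domain_term):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best


def link_relevance(anchor_text, surrounding_text, threshold=0.2):
    """Score a link by the semantic similarity of its keywords to the taxonomy."""
    keywords = set(anchor_text.lower().split()) | set(surrounding_text.lower().split())
    scores = [
        max(keyword_similarity(k, t) for t in DOMAIN_TAXONOMY) for k in keywords
    ]
    score = sum(scores) / len(scores) if scores else 0.0
    return score, score >= threshold


# Example: decide whether to enqueue a candidate link for crawling.
score, follow = link_relevance(
    "world cup schedule", "latest football tournament fixtures"
)
print(f"relevance={score:.2f}, follow={follow}")
```

In a full crawler, links that pass the threshold would be pushed onto a priority queue ordered by relevance score, so the most promising links are fetched first.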