Web spam is a technique by which irrelevant pages are ranked higher than relevant ones in a search engine's results. Spam pages generally provide insufficient and inappropriate results for users. Many researchers are working to detect spam pages; however, no universally efficient technique has been developed so far that can detect all of them. This paper is an effort in that direction: we propose a combined content- and link-based approach to identify spam pages. The content-based approach uses a term density and Part-of-Speech (POS) ratio test, while the link-based approach explores collaborative detection using personalized page ranking to classify a Web page as spam or non-spam. For experimental purposes, the WEBSPAM-UK2006 dataset has been used, and the results have been compared with some existing approaches. A promising F-measure of 75.2% demonstrates the applicability and efficiency of our approach.
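The abstract does not spell out how the two content features are computed; as an illustration only, here is a minimal pure-Python sketch of plausible definitions (the function names, tag conventions, and toy token lists are assumptions, not the authors' implementation):

```python
def term_density(tokens, term):
    """Fraction of tokens equal to a given term; heavy repetition
    of a single term is a common content-spam signal."""
    if not tokens:
        return 0.0
    return tokens.count(term) / len(tokens)

def pos_ratio(tagged_tokens, tag_prefix="NN"):
    """Share of tokens whose POS tag starts with tag_prefix
    (e.g. 'NN' for nouns, using Penn Treebank-style tags)."""
    if not tagged_tokens:
        return 0.0
    hits = sum(1 for _, tag in tagged_tokens if tag.startswith(tag_prefix))
    return hits / len(tagged_tokens)

# Toy example: a keyword-stuffed token list scores a high term density.
tokens = ["cheap", "pills", "cheap", "pills", "cheap", "buy"]
print(term_density(tokens, "cheap"))  # 0.5

tagged = [("buy", "VB"), ("cheap", "JJ"), ("pills", "NNS"), ("now", "RB")]
print(pos_ratio(tagged, "NN"))  # 0.25
```

In a real detector these scores would feed a classifier together with the link-based features; the thresholds and tag set are not specified in the abstract.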
In several machine vision applications, a fundamental step is to precisely determine the relation between the image of an object and its physical dimensions by performing a calibration process. Our aim is to devise an enhanced mechanism for camera calibration that improves on the methods already available in OpenCV. Good calibration is important when we need to reconstruct a world model or interact with the world, as in robot hand-eye coordination. Various calibration techniques have been developed to meet the rising demand for higher accuracy, but they fall short of producing precise results. In this paper we propose an enhanced camera calibration procedure that uses a special grid pattern of concentric circles with special markers. The overall objective is to minimize the re-projection error for good camera calibration.
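The quantity being minimized, the re-projection error, is the distance between where the calibrated model projects each grid point and where that point was actually detected in the image. A minimal sketch of the standard RMS form (the point lists below are made-up toy values, not data from the paper):

```python
import math

def reprojection_error(projected, observed):
    """Root-mean-square Euclidean distance between model-projected
    image points and the points detected in the real image."""
    assert len(projected) == len(observed) and projected
    sq = sum((px - ox) ** 2 + (py - oy) ** 2
             for (px, py), (ox, oy) in zip(projected, observed))
    return math.sqrt(sq / len(projected))

# Toy example: two grid points, each detected slightly off its prediction.
proj = [(100.0, 100.0), (200.0, 150.0)]
obs = [(100.5, 99.5), (199.0, 151.0)]
print(round(reprojection_error(proj, obs), 3))  # 1.118
```

OpenCV's `cv2.calibrateCamera` returns this RMS value directly; a lower value over the whole grid indicates a better fit of the estimated camera parameters.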
The dynamic Web, which contains a huge number of digital documents, is expanding day by day. It has therefore become a tough challenge to search for a particular document in such a large collection. Text classification is a technique that can speed up search and retrieval tasks and hence is the need of the hour. Aiming in this direction, this study proposes an efficient technique that uses the concept of the connected component (CC) of a graph and WordNet, along with four established feature selection techniques (TF-IDF, Chi-square, Bi-Normal Separation (BNS) and Information Gain (IG)), to select the best features from a given input dataset and prepare an efficient training feature vector. Next, a multilayer extreme learning machine (ML-ELM, which is based on the architecture of deep learning) and other state-of-the-art classifiers are trained on this feature vector to classify the text data. The experimental work has been carried out on the DMOZ and 20-Newsgroups datasets. We studied the behavior of the different classifiers under these four feature selection techniques and observed that ML-ELM achieved the maximum overall F-measure: 72.28% on the DMOZ dataset using TF-IDF and 81.53% on the 20-Newsgroups dataset using BNS, outperforming the other state-of-the-art classifiers, which signifies the usefulness of the deep learning employed by ML-ELM for classifying text data.
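Of the four feature selection techniques named, TF-IDF is the simplest to illustrate. The sketch below is a textbook pure-Python version with an unsmoothed idf over a toy tokenized corpus; it is not the paper's pipeline, and the variable names are assumptions:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF scores for a tokenized corpus.
    tf = term count / doc length; idf = log(N / doc frequency)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

docs = [["deep", "learning", "text"],
        ["text", "classification"],
        ["deep", "network"]]
s = tf_idf(docs)
print(round(s[0]["learning"], 3))  # 0.366
```

In a classification pipeline such as the one described, each document's score dictionary would be mapped onto a fixed vocabulary to form the training feature vector; terms that appear in every document get an idf of zero and thus carry no discriminative weight.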
With the rising quantity of textual data available in electronic format, organizing it has become a highly challenging task. In the present paper, we explore a document organization framework that exploits an intelligent hierarchical clustering algorithm to generate an index over a set of documents. The framework has been designed to be scalable and accurate even with large corpora. The advantage of the proposed algorithm lies in its need for minimal inputs, with most of the hierarchy attributes decided automatically using statistical methods. The use of topic modeling in a pre-processing stage ensures robustness to a range of variations in the input data. For the experimental work, the 20-Newsgroups dataset has been used. The F-measure of the proposed approach has been compared with the traditional K-Means and K-Medoids clustering algorithms. Test results demonstrate the applicability, efficiency and effectiveness of our proposed approach. After extensive experimentation, we conclude that the framework shows promise for further research and specialized commercial applications.
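K-Means, one of the two baselines the framework is compared against, alternates between assigning each point to its nearest centroid and recomputing each centroid as the mean of its cluster. A minimal pure-Python sketch on made-up 2-D points (the data and starting centroids are illustrative, not from the paper's experiments):

```python
def kmeans(points, centroids, iters=10):
    """Plain K-Means: assign each point to the nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else cen
                     for pts, cen in zip(clusters, centroids)]
    return centroids

# Two well-separated toy groups of points.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
cents = kmeans(pts, [(0.0, 0.0), (5.0, 5.0)])
print([(round(x, 2), round(y, 2)) for x, y in cents])
```

Unlike the proposed framework, flat K-Means requires the number of clusters as input and yields no hierarchy, which is precisely the limitation the automated hierarchical approach aims to remove.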