In computer vision, the efficient representation of image feature vectors for image retrieval remains a significant problem. Extensive research has been undertaken on Content-Based Image Retrieval (CBIR) using various descriptors, and machine learning algorithms paired with certain descriptors have significantly improved the performance of these systems. In the proposed research, a new CBIR scheme was implemented to address the semantic-gap issue and to form an efficient feature vector. The technique is based on histogram formation for the query and dataset images. The auto-correlogram of each image was computed in the RGB color space, followed by moment extraction. To form efficient feature vectors, the Discrete Wavelet Transform (DWT) was applied in a multi-resolution framework. A codebook was formed using a density-based clustering approach, Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The similarity index was computed as the Euclidean distance between the feature vector of the query image and those of the dataset images. Several classifiers, namely Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Decision Tree, were used to classify the images. Experiments were performed on three publicly available datasets, and the proposed framework compared favorably in accuracy with other state-of-the-art frameworks.
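A minimal sketch of three of the retrieval stages described above (histogram-based features, a DBSCAN codebook, and Euclidean-distance ranking) might look as follows. The toy random "images", the bin count, and the DBSCAN parameters are illustrative assumptions, and the auto-correlogram, moment, and DWT steps are omitted for brevity:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy stand-ins for a dataset of RGB images (assumption: 32x32x3 random arrays).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(32, 32, 3)) for _ in range(6)]

def rgb_histogram(img, bins=8):
    """Per-channel histogram, concatenated into one feature vector."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()  # normalize so image size does not matter

features = np.stack([rgb_histogram(im) for im in images])

# Codebook via DBSCAN, as in the abstract (eps/min_samples are assumptions).
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(features)

# Retrieval: rank dataset images by Euclidean distance to the query.
query = rgb_histogram(images[0])
dists = np.linalg.norm(features - query, axis=1)
ranking = np.argsort(dists)
```

Because the query here is itself the first dataset image, it ranks first with distance zero; with a real query, `ranking` orders the dataset from most to least similar.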
The ability of automated technologies to correctly identify a human's actions provides considerable scope for systems that make use of human-machine interaction. Thus, automatic 3D Human Action Recognition is an area that has seen significant research effort. In the work described here, a human's everyday 3D actions recorded in the NTU RGB+D dataset are identified using a novel structured-tree neural network. The nodes of the tree represent the skeleton joints, with the spine joint serving as the root. The connection between a child node and its parent is known as the incoming edge, while the reciprocal connection is known as the outgoing edge. The use of a tree structure leads to a system that intuitively maps to human movements. The classifier uses the change in displacement of joints and the change in the angles between incoming and outgoing edges as features for classifying the actions performed.
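The two feature types named above (joint displacement between frames, and the angle between a node's incoming and outgoing edges) can be sketched as follows. The 5-joint parent map and the toy poses are assumptions; the real NTU RGB+D skeleton has 25 joints, and the abstract does not specify the exact feature computation:

```python
import numpy as np

# Toy skeleton: joint index -> parent index (joint 0, the spine, is the root).
# Assumption: a 5-joint tree standing in for the 25-joint NTU RGB+D skeleton.
PARENT = {1: 0, 2: 0, 3: 1, 4: 2}

def edge_angle_features(pose):
    """Angle at each internal node between its incoming edge (from its
    parent) and an outgoing edge (to a child), per the tree formulation."""
    angles = []
    for child, parent in PARENT.items():
        grand = PARENT.get(parent)
        if grand is None:          # the root has no incoming edge
            continue
        incoming = pose[parent] - pose[grand]   # edge arriving at `parent`
        outgoing = pose[child] - pose[parent]   # edge leaving toward `child`
        cos = np.dot(incoming, outgoing) / (
            np.linalg.norm(incoming) * np.linalg.norm(outgoing))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

def displacement_features(pose_t, pose_t1):
    """Per-joint displacement magnitude between consecutive frames."""
    return np.linalg.norm(pose_t1 - pose_t, axis=1)

# Two toy frames: every joint shifts by 0.1 along each axis.
pose_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                   [2, 0, 0], [0, 2, 0]], dtype=float)
pose_b = pose_a + 0.1
disp = displacement_features(pose_a, pose_b)
ang = edge_angle_features(pose_a)
```

Stacking such per-frame feature vectors over a clip would give the input sequence that a classifier of this kind could consume.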
Digital watermarking is of prime importance in the modern networked world, where the demand for copyright protection has grown. It offers a solution to the problem of copyright protection and authentication of multimedia data in a networked environment. In this paper, a non-blind watermarking scheme is proposed that embeds randomly generated binary numbers as the watermark. A Human Visual System (HVS) model is used to embed the watermark below the detection threshold, the Just Noticeable Difference (JND). High robustness and imperceptibility are achieved through JND-based selection of the detail coefficients. Experimental results show that the proposed technique is resistant to common signal-processing operations and geometric attacks.
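A minimal sketch of the embedding idea (a random binary watermark added to wavelet detail coefficients, with non-blind extraction against the original) is shown below. The one-level Haar DWT, the fixed strength `ALPHA` (a crude stand-in for a real per-coefficient JND threshold), and the toy 8x8 image are all assumptions, not the paper's actual method:

```python
import numpy as np

def haar_dwt_1level(img):
    """One-level 2-D Haar DWT: approximation (LL) and detail (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    hl = (a[:, 0::2] - a[:, 1::2]) / 2
    lh = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt_1level(ll, lh, hl, hh):
    """Inverse of the transform above (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(8, 8))       # toy grayscale image (assumption)

ll, lh, hl, hh = haar_dwt_1level(img)
bits = rng.integers(0, 2, size=hh.shape) * 2 - 1   # random watermark in {-1, +1}
ALPHA = 2.0   # fixed embedding strength; stand-in for a JND-derived threshold
marked = haar_idwt_1level(ll, lh, hl, hh + ALPHA * bits)

# Non-blind extraction: compare the marked image's detail coefficients
# against the original's.
_, _, _, hh_marked = haar_dwt_1level(marked)
recovered = np.sign(hh_marked - hh)
```

Since the Haar pair reconstructs exactly and the transform is linear, the recovered signs match the embedded bits here; a real scheme would additionally survive the signal-processing and geometric attacks the abstract evaluates.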