Image moments that are invariant to distortions such as translation, scale, and rotation are an important tool in pattern recognition. In this paper, rotational invariants for Tchebichef moments are derived. The rotational invariants are obtained neither by resampling the image nor by transforming the coordinates from rectangular to polar; they are derived using the moment normalization method, which maps the moments of a distorted image onto those of the undistorted one. Experimental results confirm the derivation and show that it offers a viable test of whether one image is a rotationally distorted version of another.
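The abstract above does not give the derivation itself, but the underlying idea of rotation-invariant moment combinations can be illustrated with a minimal sketch. The example below uses ordinary geometric central moments of a weighted point set (not the Tchebichef moments of the paper, and not the authors' normalization procedure): the quantity μ20 + μ02 is a classical rotation invariant, so it is unchanged when the point set is rotated.

```python
import math

def central_moments(points, p_max=2):
    """Geometric central moments mu_pq of a weighted point set [(x, y, w), ...]."""
    m00 = sum(w for _, _, w in points)
    xc = sum(x * w for x, _, w in points) / m00
    yc = sum(y * w for _, y, w in points) / m00
    mu = {}
    for p in range(p_max + 1):
        for q in range(p_max + 1 - p):
            mu[(p, q)] = sum((x - xc) ** p * (y - yc) ** q * w
                             for x, y, w in points)
    return mu

def rotate(points, theta):
    """Rotate every point about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, w) for x, y, w in points]

# A small asymmetric "image" as weighted points (hypothetical test data).
pts = [(1.0, 0.0, 1.0), (2.0, 1.0, 2.0), (0.5, 3.0, 1.5), (-1.0, 0.5, 1.0)]
mu_a = central_moments(pts)
mu_b = central_moments(rotate(pts, math.radians(30)))

# mu20 + mu02 (moment of inertia about the centroid) is rotation-invariant.
print(abs((mu_a[(2, 0)] + mu_a[(0, 2)]) - (mu_b[(2, 0)] + mu_b[(0, 2)])) < 1e-9)
```

Central moments are translation-invariant by construction, so rotating about the origin (which also moves the centroid) does not affect the check.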
This paper presents a novel technique for determining an image's distortion parameters, such as translation, scale, rotation, and skew, using only its moments. The parameters are recovered by solving the algebraic relationships between the moment functions of the original and the geometrically distorted image. To the best of our knowledge, these relationships have not been reported previously. The derived distortion parameters are validated experimentally on randomly scaled, rotated, and skewed images, with promising results.
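The paper's specific algebraic relations are not reproduced in the abstract, but recovering one distortion parameter from moments can be sketched with a standard result: the principal-axis angle of the second-order central moments, θ = ½·atan2(2μ11, μ20 − μ02), rotates with the shape, so the difference of orientations between two images recovers the rotation angle (modulo π). The point-set data below is hypothetical.

```python
import math

def orientation(points):
    """Principal-axis angle from second-order central moments of [(x, y, w), ...]."""
    m00 = sum(w for _, _, w in points)
    xc = sum(x * w for x, _, w in points) / m00
    yc = sum(y * w for _, y, w in points) / m00
    mu11 = sum((x - xc) * (y - yc) * w for x, y, w in points)
    mu20 = sum((x - xc) ** 2 * w for x, y, w in points)
    mu02 = sum((y - yc) ** 2 * w for x, y, w in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# Asymmetric "image" and its version rotated by 25 degrees (hypothetical data).
pts = [(1.0, 0.0, 1.0), (2.0, 1.0, 2.0), (0.5, 3.0, 1.5), (-1.0, 0.5, 1.0)]
theta = math.radians(25)
c, s = math.cos(theta), math.sin(theta)
rotated = [(c * x - s * y, s * x + c * y, w) for x, y, w in pts]

# Orientation difference recovers the rotation angle (mod pi).
recovered = (orientation(rotated) - orientation(pts)) % math.pi
print(abs(math.degrees(recovered) - 25.0) < 1e-6)
```

This only works for shapes whose second-moment ellipse is not circular (μ20 ≠ μ02 or μ11 ≠ 0); the paper's full method covers more parameters than this single-angle sketch.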
Introduction: Action recognition is a challenging time-series classification task that has received much attention recently because of its importance in critical applications such as surveillance, visual behavior study, topic discovery, security, and content retrieval. Objectives: The main objective of this research is to develop robust, high-performance human action recognition techniques. The objective is pursued by combining local and holistic feature extraction methods, analyzing which features are most effective to extract, and then applying simple, high-performance machine learning algorithms. Methods: This paper presents three robust action recognition techniques based on a series of image analysis methods for detecting activities in different scenes. The general architecture consists of shot boundary detection, shot frame-rate re-sampling, and compact feature vector extraction; variations are emphasized and strong patterns extracted from the feature vectors before classification. Results: The proposed schemes are tested on datasets with cluttered backgrounds, low- and high-resolution videos, different viewpoints, and different camera motion conditions, namely the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes yield highly accurate video analysis results compared with those of other works on these four widely used datasets. The First, Second, and Third Schemes achieve recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood-2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann. Conclusion: Each of the proposed schemes provides high recognition accuracy compared with other state-of-the-art methods; the Second Scheme in particular gives results comparable to benchmarked approaches.
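The abstract names shot boundary detection as the first stage of the pipeline but does not describe the authors' detector. A common baseline for this stage, shown here as an illustrative sketch and not the paper's method, is to declare a cut wherever the L1 distance between consecutive frame intensity histograms exceeds a threshold; the frames and the threshold below are hypothetical.

```python
def histogram(frame, bins=8, max_val=256):
    """Normalized intensity histogram of a frame given as a flat list of pixel values."""
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    n = len(frame)
    return [c / n for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Indices where the L1 histogram difference between consecutive frames exceeds threshold."""
    cuts = []
    prev = histogram(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        cur = histogram(f)
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two synthetic "shots": five dark frames followed by five bright frames.
dark = [10] * 100
bright = [200] * 100
frames = [dark] * 5 + [bright] * 5
print(shot_boundaries(frames))  # → [5]
```

Real detectors add temporal smoothing and adaptive thresholds to cope with gradual transitions and camera motion, which this fixed-threshold sketch ignores.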