We focus on gesture recognition based on 3D information in the form of a point cloud of the observed scene. A descriptor of the scene is built on the basis of the Viewpoint Feature Histogram (VFH). To increase the distinctiveness of the descriptor, the scene is divided into smaller 3D cells and a VFH is calculated for each of them. The method is verified on publicly available Polish and American sign language datasets containing dynamic gestures as well as hand postures acquired with a time-of-flight (ToF) camera or a Kinect. Results of a cross-validation test are given. Hand postures are recognized using a nearest neighbour classifier with the city-block distance. For dynamic gestures, two types of classifiers are applied: (i) the nearest neighbour technique with dynamic time warping and (ii) hidden Markov models. The results confirm the usefulness of our approach.
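As a rough illustration of the posture classifier named above, the following sketch shows a 1-nearest-neighbour rule with the city-block (L1) distance over a scene descriptor concatenated from per-cell VFH histograms. Descriptor extraction itself (e.g., with PCL) is assumed to happen elsewhere; the cell count and all data below are hypothetical placeholders, not the authors' setup.

```python
# Illustrative sketch (not the authors' code): 1-nearest-neighbour posture
# classification with the city-block (L1) distance over a scene descriptor
# built from per-cell VFH histograms. A standard VFH has 308 bins; the cell
# count and all data below are hypothetical placeholders.
import numpy as np

def classify_posture(query: np.ndarray,
                     train_descriptors: np.ndarray,
                     train_labels: np.ndarray) -> int:
    """Return the label of the training descriptor nearest to the query
    under the city-block (sum of absolute differences) distance."""
    distances = np.sum(np.abs(train_descriptors - query), axis=1)
    return int(train_labels[np.argmin(distances)])

# Hypothetical usage: 100 training postures, scene split into 8 cells,
# one 308-bin VFH per cell, concatenated into a single descriptor.
rng = np.random.default_rng(0)
train = rng.random((100, 8 * 308))
labels = rng.integers(0, 10, size=100)
query = rng.random(8 * 308)
print(classify_posture(query, train, labels))
```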
The paper presents a method for recognizing sequences of static letters of the Polish finger alphabet using point cloud descriptors: the viewpoint feature histogram, eigenvalue-based descriptors, the ensemble of shape functions, and the global radius-based surface descriptor. Each sequence consists of quick, highly coarticulated motions, and classification is performed by networks of hidden Markov models trained on transitions between postures corresponding to particular letters. Three kinds of left-to-right Markov models of the transitions, two networks of the transition models (one independent of and one dependent on a dictionary), and various combinations of point cloud descriptors are examined on a publicly available dataset of 4,200 executions (registered as depth map sequences) prepared by the authors. The hand shape representation proposed in our method can also be applied to recognition of hand postures in single frames. We confirmed this using a known, challenging American finger alphabet dataset with about 60,000 depth images.
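To make the modelling concrete, here is a minimal sketch of one left-to-right transition model of the kind described above, assuming the hmmlearn library. The per-frame feature vectors stand in for the point cloud descriptors listed in the abstract; the state count, dimensions, and data are placeholders, and the authors' actual model topologies may differ.

```python
# Illustrative sketch, assuming hmmlearn: one left-to-right Gaussian HMM
# modelling a transition between two letter postures. Placeholder data.
import numpy as np
from hmmlearn import hmm

def left_to_right_hmm(n_states: int) -> hmm.GaussianHMM:
    """Gaussian HMM constrained to left-to-right transitions."""
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            init_params="mc")  # keep our pi and A below
    model.startprob_ = np.eye(n_states)[0]     # always start in state 0
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = 0.5                          # self-loop
        A[i, min(i + 1, n_states - 1)] += 0.5  # forward step; last state absorbs
    model.transmat_ = A                        # zero entries stay zero under EM
    return model

# Hypothetical training: 20 executions of one transition, 30 frames each,
# with a 64-dimensional descriptor per frame.
rng = np.random.default_rng(0)
sequences = [rng.random((30, 64)) for _ in range(20)]
model = left_to_right_hmm(n_states=3)
model.fit(np.concatenate(sequences), lengths=[len(s) for s in sequences])
print(model.score(sequences[0]))  # log-likelihood of one sequence
```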
Gestures are a natural means of communication between humans, and therefore their application would benefit many fields where the use of typical input devices, such as keyboards or joysticks, is cumbersome or impractical (e.g., in noisy environments). Recently, with the emergence of new cameras that not only capture colour images of the observed scene but also offer the software developer rich information on the number of visible humans and, most interestingly, the 3D positions of their body parts, practical applications using body gestures have become more popular. Such information is provided in the form of skeletal data. In this paper, an approach to gesture recognition based on skeletal data using a nearest neighbour classifier with dynamic time warping is presented. Since similar approaches are widely used in the literature, a few practical improvements that led to better recognition results are proposed. The approach is extensively evaluated on three publicly available gesture datasets and compared with state-of-the-art classifiers. For some gesture datasets, the proposed approach outperformed its competitors in terms of recognition rate and recognition time.
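The baseline classifier family discussed above can be sketched as follows: 1-nearest-neighbour classification of skeletal sequences under a plain dynamic time warping (DTW) distance. The paper's specific improvements are not reproduced here, and the joint data and dimensions are hypothetical placeholders.

```python
# Illustrative sketch: 1-NN gesture classification with a plain DTW distance
# over sequences of skeletal feature vectors. Placeholder data only.
import numpy as np

def dtw(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """DTW distance between two sequences of per-frame feature vectors,
    using the Euclidean distance between single frames."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_gesture(query, train_seqs, train_labels):
    """1-NN rule: label of the training sequence nearest under DTW."""
    distances = [dtw(query, s) for s in train_seqs]
    return train_labels[int(np.argmin(distances))]

# Hypothetical usage: each frame holds x-y-z positions of 20 skeletal
# joints (60 values); sequences vary in length, as real gestures do.
rng = np.random.default_rng(0)
train = [rng.random((int(rng.integers(20, 40)), 60)) for _ in range(10)]
labels = list(range(10))
print(classify_gesture(rng.random((25, 60)), train, labels))
```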
Purpose
This paper aims to present a vision-based method for determining the position of a fixed-wing aircraft approaching a runway.
Design/methodology/approach
The method determines the location of the aircraft based on the positions of the precision approach path indicator (PAPI) lights and of the approach light system with sequenced flashing lights in the image captured by an on-board camera.
Findings
As the relation of the lighting systems to the touchdown area of the considered runway is known in advance, the lights detected in the image, seen as glowing lines or highlighted areas, can be mapped onto real-world coordinates and then used to estimate the position of the aircraft. Furthermore, the colours of the lights are detected and can be used as auxiliary information.
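The mapping step described in these findings is, in essence, camera pose estimation from 2D-3D correspondences. The sketch below shows one standard way to perform such a step, a Perspective-n-Point (PnP) solve with OpenCV; the paper's actual estimation procedure may differ, and all coordinates, intrinsics, and the light layout here are hypothetical.

```python
# Illustrative sketch, not the paper's method: estimating camera (aircraft)
# position from known runway-frame light positions and their detected image
# locations via OpenCV's Perspective-n-Point solver. All numbers are made up.
import numpy as np
import cv2

# Known 3D light positions in a runway frame (metres): four PAPI lights
# beside the threshold and one sequenced flashing light on the approach path.
object_points = np.array([
    [0.0, -12.0, 0.0],
    [0.0,  -4.0, 0.0],
    [0.0,   4.0, 0.0],
    [0.0,  12.0, 0.0],
    [-300.0, 0.0, 0.0],
], dtype=np.float64)

# Detected light centroids in the image (pixels) -- placeholder values.
image_points = np.array([
    [612.0, 390.0], [628.0, 389.0], [644.0, 389.0],
    [660.0, 388.0], [636.0, 352.0],
], dtype=np.float64)

# Pinhole intrinsics, assumed known from camera calibration.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()  # aircraft position, runway frame
    print("estimated position (m):", camera_position)
```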
Practical implications
The presented method can be considered a potential source of flight data for autonomous approaches and for the augmentation of manual approaches.
Originality/value
In this paper, a feasibility study of the concept is presented and preliminarily validated.