Abstract. This paper proposes and evaluates a watermarking-based approach to certifying the authenticity of iris images at the moment they are captured by genuine equipment. In the proposed method, iris images are secretly signed before being used in biometric processes, and the resulting signature is embedded into the JPEG carrier image in the DCT domain in a data-dependent way. Any alteration of the original (certified) image invalidates the signature for that image, and this change can be quickly identified at the receiver site. The method is therefore called fragile watermarking, to distinguish it from regular watermarking, which should exhibit some robustness against image alterations. There is no need to attach auxiliary signature data, so existing, already standardized transmission channels and storage protocols may be used. The embedding procedure requires removing some part of the original information. However, using the BATH dataset, comprising 32 000 iris images collected for 1 600 distinct eyes, we verify that the proposed alterations have no impact on iris recognition reliability, although small, statistically significant differences in genuine score distributions are observed when the watermark is embedded into both the enrollment and verification iris images. This is a unique evaluation of how embedding digital signatures as watermarks into ISO CROPPED iris images (during enrollment, verification, or both) influences the reliability of a well-established, commercial iris recognition methodology. Without loss of generality, this approach targets biometric-enabled ID documents that deploy iris data to authenticate the holder of the document.
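For illustration, here is a minimal sketch of the general technique the abstract describes: sign the image, then hide the signature bits in the least-significant bits of a quantized mid-frequency DCT coefficient of each 8x8 block. The carrier coefficient position, quantization step, and the HMAC standing in for the secret signing step are assumptions for the sketch, not the paper's actual scheme.

```python
# Sketch only: data-dependent signature embedded into DCT coefficients.
import hashlib
import hmac
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8
COEFF = (3, 2)  # assumed mid-frequency carrier position
Q = 16          # assumed quantization step for the carrier coefficient

def sign_image(img: np.ndarray, key: bytes) -> bytes:
    """Data-dependent signature; HMAC stands in for the secret signing step."""
    return hmac.new(key, img.tobytes(), hashlib.sha256).digest()

def embed(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed one bit per 8x8 block into the LSB of the quantized carrier."""
    out = img.astype(float)
    h, w = img.shape
    idx = 0
    for r in range(0, h - BLOCK + 1, BLOCK):
        for c in range(0, w - BLOCK + 1, BLOCK):
            if idx >= len(bits):
                break
            block = dctn(out[r:r + BLOCK, c:c + BLOCK], norm="ortho")
            q = int(round(block[COEFF] / Q))
            q = (q & ~1) | int(bits[idx])  # force LSB to the signature bit
            block[COEFF] = q * Q
            out[r:r + BLOCK, c:c + BLOCK] = idctn(block, norm="ortho")
            idx += 1
    return out.clip(0, 255).astype(np.uint8)

key = b"enrollment-station-key"                       # hypothetical key
iris = np.random.randint(0, 256, (64, 64), np.uint8)  # toy stand-in image
bits = np.unpackbits(np.frombuffer(sign_image(iris, key), np.uint8))
watermarked = embed(iris, bits[:64])                  # 64 blocks available
```

A production fragile scheme would embed directly into the quantized JPEG coefficients so that the bits survive the compression round trip; the spatial-domain version above trades that fidelity for brevity.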
A computer vision system is described that captures color image sequences, detects and recognizes static hand poses (i.e., "letters"), and interprets pose sequences in terms of gestures (i.e., "words"). The hand object is detected with a double active-contour-based method. Tracking the hand pose over a short sequence allows the detection of "modified poses", analogous to diacritic letters in national alphabets. The static hand pose set corresponds to the hand signs of a thumb alphabet. Finally, by tracking hand poses over a longer image sequence, the pose sequence is interpreted in terms of gestures. Dynamic Bayesian models and their inference methods (particle filter and Viterbi search) are applied at this stage, allowing bi-driven control of the entire system.
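As a concrete illustration of the final inference step, here is a minimal Viterbi sketch that decodes a sequence of recognized poses into the most likely hidden gesture states. The toy state set, transition matrix, and emission matrix are assumptions, not the paper's trained model.

```python
# Sketch only: Viterbi decoding of a pose sequence into gesture states.
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observed pose indices; pi: initial state probabilities;
    A[i, j]: P(state j | state i); B[i, k]: P(pose k | state i)."""
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))      # best log-prob ending in each state
    psi = np.zeros((T, n_states), int)   # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # [from, to]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy model: 2 gesture states, 3 observable pose classes.
pi = np.array([0.6, 0.4])
A  = np.array([[0.8, 0.2], [0.3, 0.7]])
B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))   # -> [0, 1, 1, 1]
```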
The article focuses on the problem of building dense 3D occupancy maps using commercial RGB-D sensors and the SLAM approach. In particular, it addresses the problem of 3D map representation, which must be able both to store millions of points and to offer efficient update mechanisms. The proposed solution builds on two key elements, visual odometry and surfel-based mapping, and contains substantial improvements: storing the surfel maps in octree form and using a frustum-culling-based method to accelerate the map update step. The performed experiments verify the usefulness and efficiency of the developed system.
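A minimal sketch of the frustum-culling idea, assuming an octree of cubic nodes and frustum planes given in normal form: subtrees whose bounding cubes fall entirely outside any plane are skipped during the map update. The node layout and plane representation are assumptions, not the paper's implementation.

```python
# Sketch only: frustum culling over an octree of surfels.
import numpy as np
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: np.ndarray
    half: float                       # half edge length of the cubic cell
    children: list = field(default_factory=list)
    surfels: list = field(default_factory=list)

def outside_plane(center, half, plane):
    """plane = (n, d) with n.x + d >= 0 inside; conservative cube test."""
    n, d = plane
    r = half * np.abs(n).sum()        # projection radius of the cube onto n
    return n @ center + d < -r

def cull(node, planes, visible):
    for p in planes:
        if outside_plane(node.center, node.half, p):
            return                    # whole subtree lies outside the frustum
    if node.children:
        for child in node.children:
            cull(child, planes, visible)
    else:
        visible.extend(node.surfels)

root = OctreeNode(center=np.zeros(3), half=4.0, surfels=["s0"])
near = (np.array([0.0, 0.0, 1.0]), 0.0)   # keep the z >= 0 half-space
visible = []
cull(root, [near], visible)
print(visible)                            # ["s0"]: the cube straddles the plane
```

Because a culled inner node prunes its entire subtree, the test touches only O(log n) nodes along the frustum boundary, which is what makes the octree layout pay off during updates.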
The goal of the research reported here was to investigate whether a design methodology utilising embodied agents can be applied to produce a multi-modal human–computer interface for controlling the visualisation of cyberspace events. This methodology requires that the designed system structure be defined in terms of cooperating agents with well-defined internal components exhibiting specified behaviours. System activities are defined in terms of finite state machines, and behaviours are parameterised by transition functions. In the investigated case, the multi-modal interface is a component of the Operational Centre, which is part of the National Cybersecurity Platform. Embodied agents have been successfully used in the design of robotic systems. However, robots operate in physical environments, while cyberspace events visualisation involves cyberspace; thus the applied design methodology required a different definition of the environment. It had to encompass both the physical environment in which the operator acts and the computer screen on which the results of those actions are presented. Smart human–computer interaction (HCI) is a time-aware, dynamic process in which two parties communicate via different modalities, e.g., voice, gesture, and eye movement. The use of computer vision and machine intelligence techniques is essential when the human is carrying out an exhausting, concentration-demanding activity. The main role of this interface is to support security analysts and operators in controlling the visualisation of cyberspace events, such as incidents or cyber attacks, especially when manipulating graphical information. Visualisation control modalities include visual gesture-based and voice-based commands.
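To make the finite-state-machine-plus-transition-function structure concrete, here is a minimal sketch of behaviour selection for one agent, with transitions expressed as predicates over multi-modal inputs. The states, inputs, and commands are illustrative assumptions, not the Operational Centre's actual design.

```python
# Sketch only: an agent behaviour FSM parameterised by transition predicates.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transition:
    target: str
    condition: Callable[[dict], bool]   # predicate over the current inputs

class AgentFSM:
    def __init__(self, initial: str):
        self.state = initial
        self.transitions = {}           # source state -> list[Transition]

    def add(self, source: str, target: str, cond):
        self.transitions.setdefault(source, []).append(Transition(target, cond))

    def step(self, inputs: dict) -> str:
        """Fire the first transition whose condition holds for the inputs."""
        for t in self.transitions.get(self.state, []):
            if t.condition(inputs):
                self.state = t.target
                break
        return self.state

# Toy interface agent: switch visualisation modes on voice/gesture commands.
fsm = AgentFSM("idle")
fsm.add("idle", "map_view", lambda x: x.get("voice") == "show map")
fsm.add("map_view", "zoomed", lambda x: x.get("gesture") == "pinch_out")
fsm.add("zoomed", "idle", lambda x: x.get("voice") == "reset")
print(fsm.step({"voice": "show map"}))   # -> "map_view"
```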