The electrocardiogram (ECG) is an emerging biometric modality that has seen roughly thirteen years of development in the peer-reviewed literature, and as such deserves a systematic review and discussion of the associated methods and findings. In this paper, we review most of the techniques that have been applied to the use of the electrocardiogram for biometric recognition. In particular, we categorize the methodologies based on the features and the classification schemes. Finally, a comparative analysis of the authentication performance of a few of the ECG biometric systems is presented, using our in-house database. The comparative study includes the cases where training and testing data come from the same and different sessions (days). The authentication results show that most of the algorithms that have been proposed for ECG-based biometrics perform well when the training and testing data come from the same session. However, when training and testing data come from different sessions, a performance degradation occurs. Multiple training sessions were incorporated to diminish the loss in performance. That notwithstanding, only a few of the proposed ECG recognition algorithms appear to be able to support performance improvement due to multiple training sessions. Only three of these algorithms produced equal error rates (EERs) in the single digits, including an EER of 5.5% using a method proposed by us.
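For context, the equal error rate (EER) quoted above is the standard verification metric: the operating point at which the false accept rate equals the false reject rate. A minimal sketch of that computation, using hypothetical match scores rather than any data from the reviewed systems:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: the threshold where false accept rate == false reject rate.
    Higher scores are assumed to indicate a better match."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))  # crossing point of the two error curves
    return (far[i] + frr[i]) / 2.0

# Hypothetical score distributions for illustration only (not the paper's data).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)   # same-person comparisons
impostor = rng.normal(0.5, 0.1, 1000)  # different-person comparisons
print(f"EER = {equal_error_rate(genuine, impostor):.1%}")
```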
We use coherently scattered X-rays to measure the molecular composition of an object throughout its volume. We image a planar slice of the object in a single snapshot by illuminating it with a fan beam and placing a coded aperture between the object and the detectors. We characterize the system and demonstrate a resolution of 13 mm in range and 2 mm in cross-range, and a fractional momentum-transfer resolution of 15%. In addition, we show that this technique allows a 100x speedup compared to previously studied pencil-beam systems using the same components. Finally, by scanning an object through the beam, we image the full four-dimensional data cube (three spatial dimensions and one material dimension) for complete volumetric molecular imaging.
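For readers unfamiliar with the quantity, the fractional momentum-transfer resolution refers to dq/q, where q = sin(theta/2)/lambda is the momentum transfer probed by coherent scatter. A small sketch of this standard relation (the energy and angle values below are illustrative, not taken from the paper):

```python
import numpy as np

HC_KEV_NM = 1.2398  # h*c in keV*nm, so lambda[nm] = HC_KEV_NM / E[keV]

def momentum_transfer(energy_kev, theta_rad):
    """Momentum transfer q = sin(theta/2) / lambda, in inverse nanometers.
    Standard coherent-scatter definition; inputs here are hypothetical."""
    return energy_kev * np.sin(theta_rad / 2.0) / HC_KEV_NM

# A 15% fractional resolution means dq/q = 0.15: two materials are
# separable when their characteristic q features differ by more than ~15%.
q = momentum_transfer(60.0, np.deg2rad(4.0))  # 60 keV photon, 4 deg scatter angle
print(f"q = {q:.3f} 1/nm, resolvable dq ~= {0.15 * q:.3f} 1/nm")
```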
In X-ray coherent scatter tomography, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume. A radiopaque 2D pattern placed between the object and the detector array enables disambiguation between different scatter events. The use of fan-beam source illumination to speed up data acquisition relative to a pencil beam presents computational challenges. To facilitate the use of iterative algorithms based on a penalized Poisson log-likelihood function, efficient computational implementations of the forward and backward models are needed. Our proposed implementation exploits physical symmetries and structural properties of the system and suggests a joint system-algorithm design, in which system design choices are influenced by computational considerations and, in turn, lead to reduced reconstruction time. Speedups of approximately 146x and 32x are achieved in the computation of the forward and backward models, respectively. Results validating the forward model and reconstruction algorithm are presented on simulated analytic and Monte Carlo data.
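To make the role of the forward and backward models concrete, the sketch below runs a toy MLEM-style iteration for a Poisson likelihood y ~ Poisson(Af). It omits the penalty term for brevity and uses a dense stand-in matrix A, whereas the paper's contribution is precisely a fast, structure-exploiting implementation of the forward model A and its adjoint A^T:

```python
import numpy as np

def poisson_mlem(y, A, n_iters=50, f0=None):
    """Minimal MLEM-style iteration for counts y ~ Poisson(A f).
    A is a dense stand-in for the forward model; this toy version does not
    attempt the structure-exploiting speedups described in the paper."""
    f = np.ones(A.shape[1]) if f0 is None else f0.copy()
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iters):
        yhat = A @ f + 1e-12                       # predicted counts (avoid /0)
        f *= (A.T @ (y / yhat)) / (sens + 1e-12)   # multiplicative EM update
    return f

# Tiny synthetic problem for illustration only.
rng = np.random.default_rng(1)
A = rng.random((200, 50))            # hypothetical forward model
f_true = rng.random(50)              # hypothetical scatter densities
y = rng.poisson(A @ f_true)          # simulated Poisson measurements
f_rec = poisson_mlem(y, A)
print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```

The backward model (A^T applied to a detector-space vector) appears in every iteration, which is why the paper's reported 146x and 32x speedups in the forward and backward computations translate directly into reduced reconstruction time.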