The performance of modern automated fingerprint recognition systems is heavily influenced by the accuracy of their feature extraction algorithm. Nowadays, there are several approaches to fingerprint feature extraction that achieve acceptable results. Problems arise under low-quality conditions, where the majority of traditional methods based on analyzing the fingerprint texture cannot tackle the problem as effectively as artificial neural networks. Many papers have demonstrated uses of neural networks in fingerprint recognition, but there is little work on using them as Level-2 feature extractors. Our goal was to contribute to this field and develop a novel algorithm employing neural networks as extractors of discriminative Level-2 features commonly used to match fingerprints. In this work, we investigated the possibilities of incorporating artificial neural networks into the fingerprint recognition process, and we implemented and documented our own software solution for fingerprint identification based on neural networks, whose impact on feature extraction accuracy and overall recognition rate was evaluated. The result of this research is a fully functional software system for fingerprint recognition that consists of a fingerprint sensing module using a high-resolution sensor, an image enhancement module responsible for image quality restoration, a Level-1 and Level-2 feature extraction module based on a neural network, and finally a fingerprint matching module using the industry-standard BOZORTH3 matching algorithm. For evaluation purposes we used several fingerprint databases with varying image quality, and the performance of our system was assessed using FMR/FNMR and ROC indicators. From the obtained results, we may conclude that neural networks have a very positive impact on the overall recognition rate, particularly on low-quality images.
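As a rough illustration of the evaluation step (not the authors' actual code), the Python sketch below shows how per-comparison BOZORTH3 scores could be collected and turned into FMR/FNMR values. It assumes the neural-network extractor has already written minutiae to NBIS-style .xyt files and that the bozorth3 binary from NIST NBIS is available on the PATH; the helper names and threshold range are hypothetical.

```python
# Illustrative sketch only: sweep a decision threshold over BOZORTH3 match
# scores to estimate FMR/FNMR. Assumes the NBIS 'bozorth3' tool is installed
# and minutiae are stored as .xyt files (one "x y theta" line per minutia).
import subprocess
import numpy as np

def bozorth3_score(probe_xyt: str, gallery_xyt: str) -> int:
    """Run the BOZORTH3 matcher on two .xyt minutiae files and return its score."""
    out = subprocess.run(["bozorth3", probe_xyt, gallery_xyt],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def fmr_fnmr(genuine_scores, impostor_scores, thresholds):
    """Compute FMR/FNMR for each decision threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fmr = np.array([(impostor >= t).mean() for t in thresholds])   # impostors accepted
    fnmr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    return fmr, fnmr

# Hypothetical usage with precomputed genuine/impostor comparison pairs:
# genuine = [bozorth3_score(p, g) for p, g in genuine_pairs]
# impostor = [bozorth3_score(p, g) for p, g in impostor_pairs]
# fmr, fnmr = fmr_fnmr(genuine, impostor, thresholds=range(0, 200))
```

Plotting fmr against (1 - fnmr) over the threshold sweep then yields the ROC curve used in the evaluation.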
Multimodal biometric systems are nowadays considered a state-of-the-art subject. Since identity establishment in everyday situations has become very significant and rather difficult, there is a need for reliable means of identification. Multimodal systems establish identity based on more than one biometric trait; hence one of their most significant advantages is the ability to provide greater recognition accuracy and resistance against forgery. Many papers have proposed various combinations of biometric traits; however, there are far fewer solutions demonstrating the use of fingerprint and finger vein patterns. Our main goal was to contribute to this particular field of biometrics. In this paper, we propose OpenFinger, an automated solution for identity recognition utilizing fingerprint and finger vein patterns that is robust to finger displacement as well as rotation. Evaluation and experiments were conducted using the SDUMLA-HMT multimodal database. Our solution has been implemented in C++ and is distributed as a collection of Linux shared libraries. First, fingerprint images are enhanced by means of adaptive filtering, where the Gabor filter plays the most significant role. Finger vein images, on the other hand, require the bounding rectangle to be accurately detected in order to focus only on the useful biometric pattern. At the extraction stage, Level-2 features are extracted from fingerprints using a deep convolutional network built with the popular Caffe framework, while SIFT and SURF features are employed for finger vein patterns. Fingerprint features are matched using a closed commercial algorithm developed by Suprema, whereas finger vein features are matched using OpenCV built-in functions, namely the brute force matcher and the FLANN-based matcher. For SIFT features, score normalization is conducted by means of the double sigmoid, hyperbolic tangent, Z-score, and Min-Max functions. On the finger vein side, the best result was obtained by a combination of SIFT features and the brute force matcher with scores normalized by the hyperbolic tangent method. In the end, fusion of both biometric traits is performed at the score level. Fusion was done by means of the sum and mean methods, achieving 2.12% EER. The complete evaluation is presented in terms of general indicators such as FAR/FRR and ROC.
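A minimal Python sketch of the finger-vein matching, score normalization, and fusion steps described above is given below. It is not the OpenFinger C++ implementation; the Lowe ratio, the tanh scaling constant, the double sigmoid parameters, and the helper names are illustrative assumptions, and only the SIFT/brute-force combination reported as best is shown.

```python
# Illustrative sketch: SIFT keypoints matched with OpenCV's brute force matcher,
# followed by the score normalization and mean-fusion steps. Parameter values
# are assumptions, not the values used in OpenFinger.
import cv2
import numpy as np

def vein_match_score(img1, img2, ratio=0.7):
    """Number of ratio-test-filtered SIFT matches as a raw similarity score."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def min_max(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    s = np.asarray(s, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(s):
    # tanh-based normalization mapping scores into (0, 1).
    s = np.asarray(s, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def double_sigmoid(s, t, r1, r2):
    # Double sigmoid normalization around reference point t with edge widths r1, r2.
    s = np.asarray(s, dtype=float)
    left = 1.0 / (1.0 + np.exp(-2.0 * (s - t) / r1))
    right = 1.0 / (1.0 + np.exp(-2.0 * (s - t) / r2))
    return np.where(s < t, left, right)

def fuse_mean(fp_scores, vein_scores):
    # Score-level fusion by averaging the normalized fingerprint and vein scores.
    return (np.asarray(fp_scores) + np.asarray(vein_scores)) / 2.0

# fused = fuse_mean(min_max(fp_scores),
#                   tanh_norm([vein_match_score(a, b) for a, b in vein_pairs]))
```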
This paper deals with historical encrypted manuscripts and introduces an automated method for the detection and transcription of ciphertext symbols for subsequent cryptanalysis. Our database contains documents used in the past by aristocratic families living in the territory of Slovakia. They are encrypted using a nomenclator, which is a specific type of substitution cipher. In our case, the nomenclator uses digits as ciphertext symbols. We have proposed a method for the detection, classification, and transcription of handwritten digits from the original documents. Our method is based on Mask R-CNN, a deep convolutional neural network for instance segmentation. Mask R-CNN was trained on a manually collected database of digit annotations. We employ a specific strategy where the input image is first divided into small blocks. The image blocks are then passed to Mask R-CNN to obtain detections. This way we avoid problems related to the detection of a large number of small, dense objects in a high-resolution image. Experiments have shown promising detection performance for all digit types with minimal false detections.
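The block-wise detection strategy could look roughly like the sketch below. Here run_mask_rcnn is a hypothetical stand-in for the trained Mask R-CNN model (whatever framework it is built on), assumed to return boxes, classes, and scores in tile coordinates; the block size and overlap are assumptions.

```python
# Illustrative sketch of block-wise detection: the high-resolution page image is
# split into overlapping tiles, each tile is passed to a Mask R-CNN detector,
# and the resulting boxes are shifted back into page coordinates.
def detect_digits_blockwise(page, run_mask_rcnn, block=512, overlap=64):
    h, w = page.shape[:2]
    step = block - overlap
    detections = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            tile = page[y:y + block, x:x + block]
            boxes, classes, scores = run_mask_rcnn(tile)
            for (x1, y1, x2, y2), c, s in zip(boxes, classes, scores):
                # Shift the tile-local box back into page coordinates.
                detections.append(((x1 + x, y1 + y, x2 + x, y2 + y), c, s))
    return detections
```

Overlapping the tiles keeps digits that straddle a block boundary detectable in at least one tile; duplicate detections in the overlap region would then be merged, for example by non-maximum suppression.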