One of the most convenient biometric approaches for identifying a person is dorsal hand vein recognition. In recent years, dorsal hand veins have attracted increasing attention because of characteristics such as universality, uniqueness, permanence, contactless acquisition, and difficulty of forgery; moreover, the vein pattern remains stable as a person grows. Captured dorsal hand vein images suffer from variations in lighting conditions and brightness, the presence of hair, and noise. To address these problems, this paper aims to extract and recognize dorsal hand veins based on the largest correlation coefficient. The proposed system consists of three stages: 1) image preprocessing, 2) feature extraction, and 3) matching. To evaluate the performance of the proposed system, two databases were employed. The test results show that the correct recognition rate (CRR) and accuracy on the first database are 99.38% and 99.46%, respectively, whereas the CRR and accuracy on the second database are 99.11% and 99.07%, respectively. We conclude that the proposed method for recognizing dorsal hand veins is feasible and effective.
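The matching stage described above selects the enrolled identity whose template yields the largest correlation coefficient with the probe image. A minimal sketch of that idea, assuming feature templates are stored as NumPy arrays in a gallery dictionary (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between two flattened feature arrays."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(probe, gallery):
    """Return the gallery identity whose template has the largest
    correlation coefficient with the probe, plus all scores."""
    scores = {identity: correlation_coefficient(probe, template)
              for identity, template in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

In practice the probe would be a preprocessed vein image or feature map, and a rejection threshold on the best score would guard against impostors.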
One research area whose importance has grown significantly in recent years is lip-reading, particularly with the widespread use of deep learning techniques. Lip-reading is essential for speech recognition in noisy environments or for people with hearing impairments. It refers to recognizing spoken sentences from visual information acquired from lip movements. The lip area, especially for males, presents several challenges, such as a mustache or beard that may occlude the lips. This paper proposes an automatic lip-reading system that recognizes and classifies short English sentences spoken by speakers using deep learning networks. Frames are extracted from the input video, and each frame is passed to the Viola-Jones detector to locate the face region. Then 68 facial landmarks are determined, and landmarks 48 to 68 delimit the lip area, which is extracted by building a binary mask. The contrast of the lip image is then adjusted to improve its quality. Finally, sentences are classified using two deep learning models: AlexNet and VGG-16 Net. The database consists of 39 participants (32 males and 7 females), each of whom repeats the short sentences five times. The results show an accuracy of 90.00% for AlexNet and 82.34% for VGG-16 Net. We conclude that AlexNet classifies short sentences better than VGG-16 Net.
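The lip-extraction step above builds a binary mask from the lip landmarks and crops that region for the classifier. A minimal sketch, assuming the 68 landmarks have already been detected and are supplied as a `(68, 2)` array of integer `(x, y)` coordinates (the function name and the bounding-box simplification of the mask are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def extract_lip_region(frame, landmarks, lip_range=(48, 68)):
    """Build a binary mask over the lip landmarks and crop the lip patch.

    frame     -- 2-D grayscale image as a NumPy array
    landmarks -- (68, 2) integer array of (x, y) landmark coordinates
    lip_range -- index range of the lip landmarks (48 to 68 per the paper)
    """
    lips = landmarks[lip_range[0]:lip_range[1]]
    x0, y0 = lips.min(axis=0)          # top-left corner of the lip box
    x1, y1 = lips.max(axis=0)          # bottom-right corner of the lip box
    mask = np.zeros(frame.shape, dtype=bool)
    mask[y0:y1 + 1, x0:x1 + 1] = True  # binary mask marking the lip area
    crop = frame[y0:y1 + 1, x0:x1 + 1] # lip patch fed to the CNN
    return mask, crop
```

A polygonal mask following the landmark contour (rather than the bounding box used here for brevity) would exclude more of the surrounding beard and mustache region before contrast adjustment.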