Detecting lies is crucial in many areas, such as airport security, police investigations, and counter-terrorism. One technique for detecting lies is the identification of facial micro-expressions, which are brief, involuntary expressions shown on a person's face when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is labor-intensive, time-consuming, and inaccurate. This paper presents the design and development of a lie detection system using facial micro-expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. A LabVIEW program then converts the video into a series of frames and processes them one at a time in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates to each frame to locate key features of the facial structure. The fourth stage extracts the measurements needed to detect facial micro-expressions and determine whether the subject is lying. Testing results show that this system can interpret eight facial expressions (happiness, sadness, joy, anger, fear, surprise, disgust, and contempt) and detect facial micro-expressions. It extracts accurate output that can be employed in other fields of study such as psychological assessment. The results indicate a precision high enough to allow future development of applications that respond to spontaneous facial expressions in real time.
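The abstract's first two stages (color conversion and filtering) are standard preprocessing steps. The paper implements them in LabVIEW; the sketch below is a minimal, illustrative NumPy equivalent (grayscale conversion with BT.601 weights followed by a naive mean filter), not the authors' actual code.

```python
import numpy as np

def to_grayscale(frame):
    """Luminosity grayscale conversion using ITU-R BT.601 weights."""
    return frame @ np.array([0.299, 0.587, 0.114])

def mean_filter(gray, k=3):
    """Naive k x k mean filter for noise reduction (edge-padded)."""
    h, w = gray.shape
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

frame = np.random.rand(8, 8, 3)            # stand-in for one captured video frame
smoothed = mean_filter(to_grayscale(frame))
print(smoothed.shape)                      # (8, 8)
```

Later stages would run template matching and measurement extraction on each smoothed frame.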
Chromosome analysis is an essential task in a cytogenetics lab, where cytogeneticists diagnose whether abnormalities are present. Karyotyping is a standard technique in chromosome analysis that classifies the chromosomes in a metaphase image into 24 classes. The two main categories of chromosome abnormalities are structural abnormalities, which are changes in the structure of chromosomes, and numerical abnormalities, which include either monosomy (a missing chromosome) or trisomy (an extra copy of a chromosome). Manual karyotyping is complex, requires high domain expertise, and is time-consuming. With these motivations, in this research we used deep learning to automate karyotyping and recognize the common numerical abnormalities on a dataset of 147 non-overlapped metaphase images collected from the Center of Excellence in Genomic Medicine Research at King Abdulaziz University. The metaphase images went through three stages. The first is individual chromosome detection using the YOLOv2 Convolutional Neural Network followed by chromosome post-processing; this step achieved 0.84 mean IoU, 0.9923 AP, and 100% individual chromosome detection accuracy. The second stage is feature extraction and classification, where we fine-tuned the VGG19 network using two different approaches: one adding extra fully connected layer(s) and another replacing the fully connected layers with a global average pooling layer. The best accuracy obtained was 95.04%. The final step is abnormality detection, which obtained 96.67% abnormality detection accuracy. To further validate the proposed classification method, we examined the publicly available Biomedical Imaging Laboratory dataset and achieved 94.11% accuracy.
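The final abnormality-detection step can be reduced to counting predicted class labels per metaphase image: two copies per class is normal, one indicates monosomy, three indicates trisomy. A minimal sketch under that assumption (autosomes only; sex-chromosome handling is omitted, and the function name is illustrative, not from the paper):

```python
from collections import Counter

def detect_numerical_abnormalities(labels):
    """Flag monosomy (1 copy) and trisomy (3 copies) from the per-chromosome
    class labels predicted for one metaphase image. Covers autosome classes
    1..22 only; sex chromosomes need separate handling."""
    counts = Counter(labels)
    findings = {}
    for cls in range(1, 23):
        n = counts.get(cls, 0)
        if n == 1:
            findings[cls] = "monosomy"
        elif n == 3:
            findings[cls] = "trisomy"
        elif n != 2:
            findings[cls] = f"unexpected copy number: {n}"
    return findings

normal = [c for c in range(1, 23) for _ in range(2)]      # 2 copies of each autosome
print(detect_numerical_abnormalities(normal))             # {}
print(detect_numerical_abnormalities(normal + [21]))      # {21: 'trisomy'}
```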
Touch gesture biometric authentication is the study of a user's touch behavior on a touch device in order to identify that user. The features traditionally used in touch gesture authentication systems are extracted with hand-crafted feature extraction approaches. In this work, we investigate the ability of Deep Learning (DL) to automatically discover useful features of touch gestures and use them to authenticate the user. Four different models are investigated: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network combined with LSTM (CNN-LSTM), and CNN combined with GRU (CNN-GRU). In addition, different regularization techniques are investigated, such as activity regularization, Batch Normalization (BN), Dropout, and LeakyReLU. These deep networks were trained from scratch and tested on the TouchAlytics and BioIdent datasets for dynamic touch authentication. Results are reported in terms of authentication accuracy, False Acceptance Rate (FAR), and False Rejection Rate (FRR). The best results obtained were 96.73%, 96.07%, and 96.08% for training, validation, and testing accuracy, respectively, on the TouchAlytics dataset with the CNN-GRU model, while the best FAR and FRR on TouchAlytics were obtained with CNN-LSTM: a FAR of 0.0009 and a FRR of 0.0530. For the BioIdent dataset, the best results were 84.87%, 78.28%, and 78.35% for training, validation, and testing accuracy, respectively, with the CNN-LSTM model. The learning-based approach to touch authentication has shown good results compared with other state-of-the-art methods on the TouchAlytics dataset.
Seismic images are data collected by sending seismic waves into the earth's subsurface and recording the reflections, providing subsurface structural information. Seismic attributes are quantities derived from seismic data that provide complementary information. Enhancing seismic images by fusing them with seismic attributes improves subsurface visualization and reduces processing time. In seismic data interpretation, fusion techniques have been used to enhance the resolution and reduce the noise of a single seismic attribute. In this paper, we investigate the enhancement of 3D seismic images using image fusion techniques and neural networks to combine seismic attributes. The paper evaluates the feasibility of using image fusion models pretrained on specific image fusion tasks; these models achieved the best results on their respective tasks and are tested here for seismic image fusion. The experiments showed that image fusion techniques are capable of combining up to three seismic attributes without distortion; future studies can increase this number. This is the first study to apply models pretrained on other types of images to seismic image fusion, and the results are promising.
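As a point of reference for what "combining attributes" means at its simplest, the sketch below fuses three co-registered attribute volumes by pixel-wise weighted averaging. The pretrained neural fusion models the paper evaluates are far more sophisticated; the attribute names and weights here are illustrative assumptions.

```python
import numpy as np

def fuse_attributes(attrs, weights=None):
    """Pixel-wise weighted-average fusion of co-registered attribute volumes."""
    attrs = np.stack(attrs)                  # shape: (n_attrs, *volume_shape)
    if weights is None:
        weights = np.ones(len(attrs))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                 # normalize so output stays in range
    return np.tensordot(weights, attrs, axes=1)

rng = np.random.default_rng(0)
# Stand-ins for three 3D seismic attribute volumes (names are illustrative).
amplitude, coherence, frequency = (rng.random((4, 4, 4)) for _ in range(3))
fused = fuse_attributes([amplitude, coherence, frequency], weights=[2, 1, 1])
print(fused.shape)   # (4, 4, 4)
```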