We have recently seen significant advances in robotic machines designed to assist people in their daily lives. Socially assistive robots can now perform a number of tasks autonomously, without human supervision. However, if these robots are to be accepted by human users, there is a need to focus on forms of human-robot interaction that such users find acceptable. In this paper, we extend our previous work, originally presented in Ruiz-Garcia et al. [1], to provide emotion recognition from human facial expressions for application on a real-time robot. We expand on that work by presenting a new hybrid deep learning emotion recognition model and preliminary results obtained when our humanoid robot uses this model for real-time emotion recognition. The hybrid model combines a deep Convolutional Neural Network (CNN) for self-learnt feature extraction with a Support Vector Machine (SVM) for emotion classification. Compared to more complex approaches that use more convolutional layers, this hybrid deep learning model achieves a state-of-the-art classification rate of 96.26% when tested on the Karolinska Directed Emotional Faces dataset [2], and offers similar performance on unseen data when tested on the Extended Cohn-Kanade dataset [3]. The architecture also takes advantage of Batch Normalization [4] for fast learning from a smaller number of training samples.
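To make the hybrid design concrete, here is a minimal NumPy sketch of the CNN-features-into-SVM idea the abstract describes. This is a toy stand-in under stated assumptions, not the authors' implementation: two hand-picked kernels applied to synthetic stripe images replace the deep convolutional stack (and its Batch Normalization layers), and a hinge-loss linear SVM trained by sub-gradient descent replaces the paper's classifier; all function names and the stripe data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(images, kernels):
    """Stand-in for the CNN feature extractor: valid 2-D convolution
    with each kernel, ReLU, then global average pooling."""
    feats = []
    for img in images:
        vec = []
        for k in kernels:
            kh, kw = k.shape
            h, w = img.shape
            out = np.empty((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
            vec.append(np.maximum(out, 0.0).mean())  # ReLU + global average pool
        feats.append(vec)
    return np.asarray(feats)

def train_linear_svm(X, y, epochs=500, lr=0.05, lam=1e-3):
    """Linear SVM fitted by sub-gradient descent on the hinge loss;
    labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                           # margin violators
        gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
        gb = -y[viol].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy stand-in for two facial-expression classes: vertical- vs
# horizontal-stripe patches with a little noise.
def make_image(vertical):
    img = rng.normal(0.0, 0.05, (8, 8))
    if vertical:
        img[:, ::2] += 1.0
    else:
        img[::2, :] += 1.0
    return img

images = [make_image(i < 20) for i in range(40)]
y = np.array([1] * 20 + [-1] * 20)

kernels = [np.array([[1.0, -1.0]]),    # responds to vertical stripes
           np.array([[1.0], [-1.0]])]  # responds to horizontal stripes
X = conv_features(images, kernels)
w, b = train_linear_svm(X, y)
acc = float((np.sign(X @ w + b) == y).mean())
```

The design point the sketch preserves is the division of labour: the convolutional stage produces a compact learned (here, fixed) feature vector, and the SVM handles the final decision boundary.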
Emotion recognition is critical for everyday living and essential for meaningful interaction. If we are to progress towards human-machine interaction that engages the human user, the machine should be able to recognise the user's emotional state. Deep Convolutional Neural Networks (CNNs) have proven efficient at emotion recognition. The strong performance of these classifiers can be attributed to their ability to self-learn a down-sampled feature vector that retains spatial information through the filter kernels of the convolutional layers. Since random initialization of weights can lead to convergence at non-optimal local minima, in this paper we explore the impact of training the initial weights in an unsupervised manner. We study the effect of pre-training a deep CNN as a Stacked Convolutional Auto-Encoder (SCAE) in a greedy layer-wise unsupervised fashion for emotion recognition from facial expression images. When trained with randomly initialized weights, our CNN emotion recognition model achieves a classification rate of 91.16% on the Karolinska Directed Emotional Faces (KDEF) dataset. In contrast, when each layer of the model, including the hidden layer, is pre-trained as an auto-encoder, performance increases to 92.52%. Pre-training our CNN as an SCAE also marginally reduces training time. The emotion recognition model developed in this work will form the basis of a real-time empathic robot system.
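The greedy layer-wise idea can be sketched in miniature. As a simplifying assumption, the sketch uses small dense tied-weight auto-encoders instead of convolutional ones, and random Gaussian data instead of KDEF images; it shows only the initialisation scheme (train layer 1 on the inputs, then layer 2 on layer 1's codes), not the paper's model or fine-tuning stage.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder(X, hidden, epochs=300, lr=0.05):
    """One-layer tied-weight auto-encoder trained by gradient descent
    on the mean squared reconstruction error."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)            # encode
        R = H @ W.T + c                   # decode with tied weights
        err = (R - X) / n                 # gradient of 0.5 * mean sq. error
        dH = (err @ W) * (1.0 - H ** 2)   # back-prop through tanh
        W -= lr * (X.T @ dH + err.T @ H)  # encoder + decoder contributions
        b -= lr * dH.sum(axis=0)
        c -= lr * err.sum(axis=0)
    return W, b, c

def reconstruction_error(X, W, b, c):
    H = np.tanh(X @ W + b)
    return float(((H @ W.T + c - X) ** 2).mean())

# Greedy layer-wise pre-training: train the first layer on the inputs,
# then train the second layer on the first layer's codes.
X = rng.normal(size=(200, 16))
W1, b1, c1 = train_autoencoder(X, 8)
H1 = np.tanh(X @ W1 + b1)
W2, b2, c2 = train_autoencoder(H1, 4)

# A randomly initialised layer reconstructs the data far worse than the
# pre-trained one, which is the motivation for this initialisation.
W_rand = rng.normal(0.0, 0.1, (16, 8))
err_pretrained = reconstruction_error(X, W1, b1, c1)
err_random = reconstruction_error(X, W_rand, np.zeros(8), np.zeros(16))
```

After pre-training, the weights W1 and W2 would seed the corresponding layers of the supervised network, which is then fine-tuned on the labelled emotion data.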
Fingerprint alteration is a challenge that poses enormous security risks, and many research efforts in the scientific community have attempted to address it. However, the absence of publicly available datasets containing obfuscated and distorted fingerprints makes it difficult to identify the type of alteration, and hence to study and develop mechanisms that correct the alteration and correctly identify individuals. In this work we present the publicly available Coventry Fingerprints Dataset (CovFingDataset), with unique attributes including ten fingerprints for each of 611 subjects and, for each image, the subject's gender, hand and finger name, among others. In total we provide 55,249 images with three levels of alteration for z-cut, obliteration and central rotation synthetic alterations, the most common types of obfuscation and distortion. Moreover, we propose a Convolutional Neural Network (CNN) to identify these types of alteration. The proposed CNN model achieves a classification accuracy of 98.55%. Results are also compared with a residual CNN model pre-trained on ImageNet, which achieves an accuracy of 99.88%.
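One way to picture the per-image attributes the abstract lists is as a small labelled record. The field names, string values, and validation below are illustrative assumptions, not the dataset's actual schema or file format.

```python
from dataclasses import dataclass

# Hypothetical label vocabulary; the abstract names z-cut, obliteration
# and central rotation as the synthetic alterations.
ALTERATIONS = ("none", "z-cut", "obliteration", "central-rotation")

@dataclass(frozen=True)
class FingerprintRecord:
    """Labels attached to one fingerprint image (illustrative schema)."""
    subject_id: int   # 1..611 in the dataset described above
    gender: str       # e.g. "male" / "female"
    hand: str         # "left" / "right"
    finger: str       # "thumb", "index", "middle", "ring", "little"
    alteration: str   # one of ALTERATIONS
    level: int        # 0 for unaltered, 1..3 for the three alteration levels

    def __post_init__(self):
        if self.alteration not in ALTERATIONS:
            raise ValueError(f"unknown alteration: {self.alteration}")
        if not 0 <= self.level <= 3:
            raise ValueError(f"level out of range: {self.level}")

rec = FingerprintRecord(17, "female", "right", "index", "z-cut", 2)
```

A classifier like the proposed CNN would then predict the `alteration` (and possibly `level`) field from the image alone.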