Abstract: Detecting emotion from facial expression has become an urgent need because of its immense applications in artificial intelligence, such as human-computer collaboration, data-driven animation, and human-robot communication. Since it is a demanding and interesting problem in computer vision, several works have been conducted on this topic. The objective of this research is to develop a facial expression recognition system based on a convolutional neural network with data augmentation. This approach enables to…
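As a rough illustration of the kind of pipeline this abstract describes, the sketch below builds a small convolutional network with on-the-fly data augmentation for four expression classes. It is a minimal sketch only; the input resolution, layer widths, and augmentation ranges are assumptions and not the architecture reported in the paper.

```python
# Minimal sketch (assumed hyperparameters) of a CNN with data augmentation
# for 4-class facial expression recognition (happy, sad, angry, surprise).
import tensorflow as tf
from tensorflow.keras import layers, models

# On-the-fly augmentation applied only during training.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),          # assumed grayscale 48x48 input
    augment,
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),    # one unit per expression class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```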
“…Most studies that perform facial expression recognition directly train on facial images (Barros et al., 2015; Ahmed et al., 2019). In contrast, we train the Convolutional Neural Network (CNN) on simplified versions of the training images generated from landmarks detected via dlib (King, 2009).…”
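The snippet above describes training on simplified images derived from dlib landmarks rather than on raw face photos. The following sketch shows one plausible way to produce such simplified images: detect the 68 dlib landmarks and render only those points onto a blank canvas. The rendering choice (dots on a black background) is an assumption; the cited work may connect contours or use a different simplification.

```python
# Hedged sketch: dlib landmark detection rendered as a "simplified" face image.
# Requires the standard dlib 68-landmark model file.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def simplify(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    canvas = np.zeros_like(gray)              # blank canvas, same size as input
    for rect in detector(gray, 1):            # detected face rectangles
        shape = predictor(gray, rect)         # 68 facial landmarks
        for i in range(68):
            x, y = int(shape.part(i).x), int(shape.part(i).y)
            cv2.circle(canvas, (x, y), 1, 255, -1)   # draw each landmark as a dot
    return canvas
```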
The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and to a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets combine a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. The ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
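A minimal PyTorch sketch of the optimization idea described in this abstract: with a generator G (joint configuration to simplified face image) and a classifier C both frozen, a joint-configuration vector is updated by gradient descent so that the generated image is classified as the intended expression. The network interfaces, the number of joints, and all hyperparameters below are placeholders, not the ExGenNet implementation.

```python
# Sketch of optimizing a robot joint configuration by backpropagating the
# classification loss through a frozen classifier C and frozen generator G.
import torch
import torch.nn.functional as F

def optimize_joints(G, C, target_class, n_joints=10, steps=200, lr=0.05):
    G.eval(); C.eval()
    for p in list(G.parameters()) + list(C.parameters()):
        p.requires_grad_(False)               # only the joint vector is trainable

    q = torch.zeros(1, n_joints, requires_grad=True)   # joint configuration
    target = torch.tensor([target_class])
    opt = torch.optim.Adam([q], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        img = G(q)                            # reconstruct simplified face image
        logits = C(img)                       # predict the facial expression
        loss = F.cross_entropy(logits, target)
        loss.backward()                       # gradients flow through C and G into q
        opt.step()
        q.data.clamp_(-1.0, 1.0)              # assumed normalized joint limits
    return q.detach()
```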
“…We take 6079 images for training (approximately 1500 images per class: happy, sad, angry, and surprise), 436 images for validation, and 422 for testing our model. Many researchers [27,28] used combined datasets in their work. Creating a dataset by collecting images from different sources makes our model effective and unbiased.…”
Section: Collection of Facial Expression Database and Preprocessing
mentioning
confidence: 99%
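For illustration, the split described in the snippet above (roughly 6079 training, 436 validation, and 422 test images gathered from several sources) could be assembled along the following lines. The folder layout, source names, and random seed are hypothetical.

```python
# Illustrative assembly of a combined dataset from several sources, followed by
# train/validation/test splits of roughly the sizes quoted above.
import glob
import random
from sklearn.model_selection import train_test_split

classes = ["happy", "sad", "angry", "surprise"]
samples = []
for source in ["data/jaffe", "data/kdef", "data/custom"]:   # hypothetical folders
    for label, name in enumerate(classes):
        for path in glob.glob(f"{source}/{name}/*.jpg"):
            samples.append((path, label))

random.Random(0).shuffle(samples)
train, rest = train_test_split(samples, train_size=6079, random_state=0)
val, test = train_test_split(rest, test_size=422, random_state=0)
```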
“…By merging three datasets (JAFFE, KDEF, and custom data), they obtained a training accuracy of 96.43% and a validation accuracy of 91.81% in their real-time facial emotion classification work. Another researcher, Ahmed et al. [28], merged eight different datasets and applied augmentation techniques in their proposed CNN structure, achieving 96.24% accuracy.…”
Many scientific works have been conducted on developing emotion intensity recognition systems, but building a system capable of estimating intensity levels from small to peak with low complexity remains challenging. Therefore, we propose an effective facial emotion intensity classifier that fuses a pre-trained deep architecture with a fuzzy inference system. The pre-trained VGG16 architecture is used for basic emotion classification and predicts the emotion class along with its class index value. Based on the class index value, images are sent to the corresponding fuzzy inference system to estimate the intensity level of the detected emotion. This fusion model effectively identifies the facial emotions (happy, sad, surprise, and angry) and also predicts 13 categories of emotion intensity. It achieved 83% accuracy on a combined dataset (FER2013, CK+, and KDEF). The performance and findings of this proposed work are further compared with state-of-the-art models.
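The fusion described in this abstract can be pictured roughly as follows: a VGG16-based classifier predicts the basic emotion, and the resulting class index routes the image to a per-emotion fuzzy inference system that outputs an intensity level. The fuzzy systems below are crude placeholders (three levels rather than the thirteen intensity categories reported), and the intensity feature is assumed to be precomputed; none of this reflects the authors' exact design.

```python
# Sketch of the routing pipeline: VGG16-based emotion classifier -> per-emotion
# placeholder fuzzy inference system for intensity estimation.
import numpy as np
import tensorflow as tf

emotions = ["happy", "sad", "surprise", "angry"]

# VGG16 backbone with an illustrative 4-class head (weights/head are assumptions).
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
head = tf.keras.layers.Dense(len(emotions), activation="softmax")(base.output)
classifier = tf.keras.Model(base.input, head)

# Hypothetical per-class "fuzzy" systems: map an intensity feature in [0, 1]
# to a coarse intensity level.
fuzzy_systems = {name: (lambda feat: "low" if feat < 0.33 else
                        "medium" if feat < 0.66 else "high")
                 for name in emotions}

def classify_with_intensity(image, intensity_feature):
    x = tf.keras.applications.vgg16.preprocess_input(image[None].astype("float32"))
    class_idx = int(np.argmax(classifier.predict(x, verbose=0)))
    emotion = emotions[class_idx]
    level = fuzzy_systems[emotion](intensity_feature)   # routed by class index
    return emotion, level
```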
“…For their foundation, these studies rely on contributions from robotics (e.g., [1,11,156]) and HRI [121,122,169,225]. Further studies are rooted in human-computer interaction (e.g., [3,4,21,99,140,173]), engineering [171], and philosophy [101].…”
Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.