Humans convey their messages in different forms; expressing emotions and moods through facial expressions is one of them. In this work, to avoid the traditional feature-extraction process (geometry-based, template-based, and appearance-based methods), a CNN model is used as a feature extractor for emotion detection from facial expressions. We also use three pre-trained models: VGG-16, ResNet-50, and Inception-V3. The experiments are conducted on the FER-2013 facial expression dataset and the Extended Cohn-Kanade (CK+) dataset. On the FER-2013 dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 76.74%, 85.71%, 85.78%, and 97.93%, respectively. On the CK+ dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 84.18%, 92.91%, 91.07%, and 73.16%, respectively. Overall, Inception-V3 achieves the best result on FER-2013 with 97.93%, and ResNet-50 achieves the best result on CK+ with 92.91%.
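The CNN-as-feature-extractor pipeline described above can be sketched with the standard convolution/pooling output-size arithmetic. The layer stack below is an illustrative assumption, not the paper's exact architecture; the 48x48 input size matches the grayscale images in FER-2013.

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard output-size formula for a convolution or pooling layer:
    # out = floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# FER-2013 images are 48x48 grayscale. Hypothetical extractor: three blocks of
# conv(3x3, padding 1) followed by max-pool(2x2, stride 2).
size = 48
for _ in range(3):
    size = conv_out(size, kernel=3, padding=1)  # conv preserves spatial size
    size = conv_out(size, kernel=2, stride=2)   # pooling halves it

# The resulting size x size feature map is flattened and fed to a dense
# classifier over the emotion classes.
print(size)  # 48 -> 24 -> 12 -> 6
```

With this stack the 48x48 input is reduced to a 6x6 feature map per channel before classification; swapping in VGG-16, ResNet-50, or Inception-V3 replaces the hand-built extractor with pre-trained convolutional features in the same role.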