Facial expression recognition has made significant progress in recent years, with many commercial systems now available for real-world applications. There is strong interest in implementing a facial expression recognition system on a portable device such as a tablet or smartphone, using the camera already integrated in the device. Face-recognition phone-unlocking apps are now common on new smartphones and have proven to be a hassle-free way to unlock a phone. Implementing a facial expression system on a smartphone would enable engaging applications that measure the user's mood in daily life or serve as a tool for daily monitoring of emotion in psychology studies. However, traditional facial expression algorithms are usually computationally expensive and can only be run offline on a desktop computer. In this paper, a novel automatic system is proposed to recognize emotions from face images on a smartphone in real time. In our system, the smartphone camera captures the face image, BRIEF features are extracted, and a k-nearest neighbor algorithm is used for classification. The experimental results demonstrate that the proposed facial expression recognition on a mobile phone is successful and achieves up to 89.5% recognition accuracy.
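The abstract outlines the pipeline (camera capture, BRIEF feature extraction, k-nearest neighbor classification) without implementation detail. The following is a minimal sketch of how such a pipeline could look in Python. The specific choices here are assumptions, not the authors' method: faces are located with OpenCV's Haar cascade, BRIEF descriptors are computed on a fixed grid of keypoints inside the cropped face so every image yields a fixed-length binary vector, and scikit-learn's KNeighborsClassifier with Hamming distance does the matching.

```python
# Sketch of a capture -> BRIEF -> k-NN expression pipeline (assumed details).
# Requires opencv-contrib-python, since BRIEF lives in cv2.xfeatures2d.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FACE_SIZE = 128  # cropped faces are resized to this square resolution (assumed)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

# Fixed grid of keypoints, kept well away from the border so BRIEF does not
# discard any of them and every image produces the same number of descriptors.
GRID = [cv2.KeyPoint(float(x), float(y), 16)
        for y in range(32, FACE_SIZE - 32, 16)
        for x in range(32, FACE_SIZE - 32, 16)]

def brief_vector(gray_image):
    """Crop the largest detected face and return one fixed-length BRIEF vector."""
    faces = face_cascade.detectMultiScale(gray_image, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray_image[y:y + h, x:x + w], (FACE_SIZE, FACE_SIZE))
    _, desc = brief.compute(face, GRID)          # desc: (len(GRID), 32) uint8
    return np.unpackbits(desc)                   # binary vector for Hamming k-NN

def train_knn(images, labels, k=3):
    """Fit a k-NN classifier on BRIEF vectors of the training face images."""
    pairs = [(brief_vector(img), lab) for img, lab in zip(images, labels)]
    pairs = [(f, lab) for f, lab in pairs if f is not None]
    clf = KNeighborsClassifier(n_neighbors=k, metric="hamming")
    clf.fit([f for f, _ in pairs], [lab for _, lab in pairs])
    return clf
```

At prediction time, `brief_vector` would be applied to each camera frame and passed to the trained classifier's `predict` method; on a phone, the same logic would typically be ported to the OpenCV mobile SDK rather than run through scikit-learn.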
Abstract: Automated facial expression recognition (AFER) is a crucial technology for, and a challenging task in, human-computer interaction. Previous AFER methods have combined various features and classification methods and used basic testing approaches. In this paper, we identify the best feature descriptor for AFER by empirically evaluating two feature descriptors: the Facial Landmarks descriptor and the Center of Gravity descriptor. We examine each feature descriptor with a single classification method, the Support Vector Machine (SVM), on three distinct facial expression recognition (FER) datasets. In addition to test accuracies, we present confusion matrices of AFER. We also analyze the effect of these features and of image resolution on AFER performance. Our study indicates that the Facial Landmarks descriptor is the best choice for running AFER on mobile phones. The results of our study demonstrate that the proposed facial expression recognition mobile phone application is successful and provides up to 96.3% recognition accuracy.
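This abstract describes evaluating a landmark-based descriptor with an SVM across several FER datasets. The sketch below illustrates one plausible way to set up such an evaluation; it is not the authors' implementation. Assumptions: landmarks come from dlib's 68-point shape predictor, the descriptor is the landmark coordinates expressed relative to their center of gravity and scale-normalized, the classifier is a linear SVM from scikit-learn, and a face is detected in every image. The model file path and helper names are illustrative only.

```python
# Sketch of a Facial Landmarks descriptor + SVM evaluation (assumed details).
import dlib
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def landmark_descriptor(gray_image):
    """Return a 136-D vector of centered, scale-normalized landmark coordinates."""
    rects = detector(gray_image, 1)
    shape = predictor(gray_image, rects[0])          # assumes one detected face
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)
    pts -= pts.mean(axis=0)                          # subtract the center of gravity
    pts /= np.linalg.norm(pts, axis=1).max()         # rough scale normalization
    return pts.flatten()

def evaluate(train_imgs, train_labels, test_imgs, test_labels):
    """Fit an SVM on landmark descriptors; report accuracy and confusion matrix."""
    X_tr = np.array([landmark_descriptor(i) for i in train_imgs])
    X_te = np.array([landmark_descriptor(i) for i in test_imgs])
    clf = SVC(kernel="linear").fit(X_tr, train_labels)
    pred = clf.predict(X_te)
    return accuracy_score(test_labels, pred), confusion_matrix(test_labels, pred)
```

Running `evaluate` once per dataset would yield the per-dataset accuracies and confusion matrices the abstract refers to; repeating it at several input resolutions would reproduce the resolution analysis.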