Micro-expressions are rapid and subtle facial movements, which makes them difficult to detect and recognize. Most recent works have attempted to recognize micro-expressions using spatial and dynamic information from video clips. Physiological studies have demonstrated that the apex frame conveys the most emotion expressed in a facial expression, so it is reasonable to use the apex frame to improve micro-expression recognition. However, it remains unclear how much the apex frame contributes to micro-expression recognition. In this paper, we focus on quantifying that contribution by using the apex frame for micro-expression recognition. Firstly, we propose a new method to detect the apex frame in the frequency domain, as we find that the apex frame is strongly correlated with the amplitude change in the frequency domain. Secondly, we propose to apply a deep convolutional neural network (DCNN) to the apex frame to recognize micro-expressions. Intensive experiments on the CASME II database show that our method achieves considerable improvement over state-of-the-art methods in micro-expression recognition. These results also demonstrate that the apex frame can express the major emotion in a micro-expression.
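The abstract's core idea of spotting the apex frame via amplitude change in the frequency domain can be illustrated with a minimal numpy sketch. This is not the authors' exact algorithm: here we simply take each frame's 2D FFT magnitude, measure its total deviation from the onset (first) frame, and pick the frame with the largest deviation. The toy clip and the `spot_apex` helper are illustrative inventions.

```python
import numpy as np

def spot_apex(frames):
    """Pick the frame whose frequency-domain amplitude deviates most
    from the onset frame. A simplified sketch of frequency-based
    apex spotting, not the paper's exact method."""
    onset_amp = np.abs(np.fft.fft2(frames[0]))
    diffs = [np.sum(np.abs(np.abs(np.fft.fft2(f)) - onset_amp))
             for f in frames]
    return int(np.argmax(diffs))

# Toy clip: flat frames with a facial "movement" peaking at frame 3.
clip = [np.zeros((8, 8)) for _ in range(6)]
clip[2][4, 4] = 0.5
clip[3][4, 4] = 1.0   # strongest change -> apex
clip[4][4, 4] = 0.5
print(spot_apex(clip))  # → 3
```

A real implementation would work on aligned face crops and likely restrict attention to informative frequency bands, but the selection principle is the same: the apex is where spectral amplitude departs furthest from the onset.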
Micro-expressions (MEs) are rapid and subtle facial movements that are difficult to detect and recognize. Most recent works have attempted to recognize MEs with spatial and temporal information from video clips. According to psychological studies, the apex frame conveys the most emotional information expressed in facial expressions. However, it is not clear how the single apex frame contributes to micro-expression recognition. To address this problem, this paper first proposes a new method to detect the apex frame by estimating pixel-level change rates in the frequency domain. With frequency information, it performs more effectively on apex frame spotting than existing methods based on spatio-temporal change information. Secondly, with the apex frame, this paper proposes a joint feature learning architecture coupling local and global information to recognize MEs, because not all regions make the same contribution to ME recognition and some regions do not even contain any emotional information. More specifically, the proposed model combines local information learned from the facial regions that contribute the major emotional information with global information learned from the whole face. Leveraging the local and global information enables our model to learn discriminative ME representations and suppress the negative influence of regions unrelated to MEs. The proposed method is extensively evaluated on CASME, CASME II, SAMM, SMIC, and composite databases. Experimental results demonstrate that our method with the detected apex frame achieves considerably promising ME recognition performance compared with state-of-the-art methods employing the whole ME sequence. Moreover, the results indicate that the apex frame can significantly contribute to micro-expression recognition.
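The local-global coupling described above can be sketched in a few lines: pool a feature from the whole face (global branch) and from each informative facial region (local branches), then concatenate. The region coordinates, random projections, and `joint_features` helper are hypothetical stand-ins; in the actual architecture these representations are learned by CNN branches.

```python
import numpy as np

def joint_features(face, regions, feat_dim=4):
    """Toy sketch of joint local-global feature learning:
    project the whole face (global) and each cropped region (local)
    into feat_dim-dimensional features, then concatenate.
    Random projections stand in for learned CNN branches."""
    rng = np.random.default_rng(0)
    W_g = rng.standard_normal((face.size, feat_dim))
    feats = [face.ravel() @ W_g]                 # global branch
    for (y0, y1, x0, x1) in regions:             # local branches
        crop = face[y0:y1, x0:x1]
        W_l = rng.standard_normal((crop.size, feat_dim))
        feats.append(crop.ravel() @ W_l)
    return np.concatenate(feats)

face = np.ones((16, 16))
regions = [(2, 6, 2, 6), (2, 6, 10, 14)]  # hypothetical eye regions
feats = joint_features(face, regions)
print(feats.shape)  # (12,)
```

The concatenated vector would then feed a classifier, letting it weigh region-specific evidence against the whole-face context.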
Action Unit (AU) detection plays an important role in facial behaviour analysis. In the literature, AU detection has been extensively researched for macro-expressions. However, to the best of our knowledge, there is limited research on AU analysis for micro-expressions. In this paper, we focus on AU detection in micro-expressions. Due to the small size and low intensity of micro-expression databases, micro-expression AU detection is challenging. To alleviate these problems, we propose a novel micro-expression AU detection method that utilizes self high-order statistics of spatio-wise and channel-wise features, which can be considered as spatial and channel attentions, respectively. Through the spatial attention module, we exploit the rich relationship information among facial regions to increase AU detection robustness on limited micro-expression samples. In addition, considering the low intensity of micro-expression AUs, we further propose to explore high-order statistics to better capture subtle regional changes on the face and obtain more discriminative AU features. Intensive experiments show that our proposed approach outperforms the basic framework by 0.0859 on CASME II, 0.0485 on CASME, and 0.0644 on SAMM in terms of the average F1-score.
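The channel-attention idea built on self high-order statistics can be illustrated with a second-order sketch: compute the channel-by-channel Gram matrix of a feature map, pool it into one relation score per channel, and reweight channels with a softmax. This is a simplified stand-in for the paper's module, and the `channel_attention` helper and toy feature map are illustrative.

```python
import numpy as np

def channel_attention(feat):
    """Toy channel attention from second-order channel statistics.
    feat: (C, H, W) feature map. Channels whose responses correlate
    strongly with the rest receive larger attention weights."""
    C, H, W = feat.shape
    X = feat.reshape(C, -1)            # C x (H*W)
    gram = X @ X.T / X.shape[1]        # second-order channel statistics
    scores = gram.mean(axis=1)         # pool relations per channel
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                 # softmax over channels
    return feat * attn[:, None, None]

feat = np.ones((3, 4, 4))
feat[1] *= 2.0  # a channel with stronger, subtle-change-like response
out = channel_attention(feat)
print(out.shape)  # (3, 4, 4)
```

A spatial counterpart follows the same recipe with the Gram matrix taken over spatial positions instead of channels, which is how the two attentions in the abstract complement each other.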
Micro-expressions (MEs) are involuntary facial movements revealing people's hidden feelings in high-stake situations and have practical importance in various fields. Early methods for Micro-expression Recognition (MER) are mainly based on traditional handcrafted features. Recently, with the success of Deep Learning (DL) in various tasks, neural networks have received increasing interest in MER. Different from macro-expressions, MEs are spontaneous, subtle, and rapid facial movements, which makes data collection and annotation difficult; publicly available datasets are therefore usually small-scale. Various DL approaches have been proposed to address these issues and improve MER performance. In this survey, we provide a comprehensive review of deep MER and define a new taxonomy for the field encompassing all aspects of MER based on DL, including datasets, each step of the deep MER pipeline, and performance comparisons of the most influential methods. The basic approaches and advanced developments are summarized and discussed for each aspect. Additionally, we summarize the remaining challenges and potential directions for the design of robust MER systems. Finally, ethical considerations in MER are discussed. To the best of our knowledge, this is the first survey of deep MER methods, and it can serve as a reference point for future MER research.
Survey/review study: From Emotion AI to Cognitive AI. Guoying Zhao*, Yante Li, and Qianru Xu. University of Oulu, Pentti Kaiteran Katu 1, Linnanmaa 90570, Finland. *Correspondence: guoying.zhao@oulu.fi. Received: 22 September 2022; Accepted: 28 November 2022; Published: 22 December 2022. Abstract: Cognitive computing is recognized as the next era of computing. To make hardware and software systems more human-like, emotion artificial intelligence (AI) and cognitive AI, which simulate human intelligence, are the core of real AI. The current boom of sentiment analysis and affective computing in computer science has given rise to the rapid development of emotion AI. However, research on cognitive AI has only started in the past few years. In this visionary paper, we briefly review the current development of emotion AI, introduce the concept of cognitive AI, and propose the envisioned future of cognitive AI, which intends to let computers think, reason, and make decisions in ways similar to humans. The important aspects of cognitive AI in terms of engagement, regulation, decision making, and discovery are further discussed. Finally, we propose important directions for constructing future cognitive AI, including data and knowledge mining, multi-modal AI explainability, hybrid AI, and potential ethical challenges.