Micro-expression (ME) analysis has recently become an attractive research topic. Nevertheless, most ME studies focus on the recognition task, while the spotting task is rarely addressed. While micro-expression recognition methods have obtained promising results by applying deep learning techniques, the performance of ME spotting still needs substantial improvement. Most approaches still rely on traditional techniques, such as distance measurement between handcrafted frame features, which are not robust enough to detect ME locations correctly. In this paper, we propose a novel method for ME spotting based on a deep sequence model. Our framework consists of two main steps: 1) from each position of the video, we extract a spatial-temporal feature that can discriminate MEs from extrinsic movements; 2) we propose an LSTM network that exploits both local and global correlations of the extracted features to predict the score of the ME apex frame. Experiments on two publicly available ME spotting databases demonstrate the effectiveness of the proposed method.
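To make the two-step pipeline concrete, below is a minimal sketch of the second step, assuming per-frame spatial-temporal features have already been extracted. The feature dimension, hidden size, bidirectional LSTM, and sigmoid scoring head are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: an LSTM that maps a sequence of per-frame
# spatial-temporal feature vectors to per-frame apex scores.
import torch
import torch.nn as nn

class ApexScorer(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # The LSTM aggregates local and global temporal context of the features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # A linear head maps each time step's hidden state to a scalar score.
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, feats):                  # feats: (batch, frames, feat_dim)
        h, _ = self.lstm(feats)                # h: (batch, frames, 2 * hidden_dim)
        scores = torch.sigmoid(self.head(h))   # per-frame apex probability
        return scores.squeeze(-1)              # (batch, frames)

# Example: score a clip of 64 frames; the apex estimate is the argmax frame.
model = ApexScorer()
clip_feats = torch.randn(1, 64, 256)
apex_frame = model(clip_feats).argmax(dim=1)
```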
Abstract. Micro-expressions are very rapid and involuntary facial expressions that indicate suppressed or concealed emotions and can lead to many potential applications. Recently, research in micro-expression spotting has attracted increasing attention. By investigating existing methods, we find that evaluation standards for micro-expression spotting are highly desired. To address this issue, we construct a benchmark for fairer and better performance evaluation of micro-expression spotting approaches. Firstly, we propose a sliding-window based multi-scale evaluation standard with a series of protocols. Secondly, baseline results of popular features are provided. Finally, we also raise concerns about how to take advantage of machine learning techniques.
Micro-expressions are rapid and involuntary facial expressions that indicate suppressed or concealed emotions. Recently, research on automatic micro-expression (ME) spotting has attracted increasing attention. ME spotting is a crucial step prior to further ME analysis tasks. The spotting results can serve as important cues for many other human-oriented tasks and thus have many potential applications. In this paper, by investigating existing ME spotting methods, we recognize the urgency of standardizing the performance evaluation of micro-expression spotting methods. To this end, we construct a micro-expression spotting benchmark (MESB). Firstly, we set up a sliding-window based multi-scale evaluation framework. Secondly, we introduce a series of protocols. Thirdly, we provide baseline results of popular methods. The MESB facilitates research on ME spotting with fairer and more comprehensive evaluation and also enables the wide use of cutting-edge machine learning tools. This technical report is extended from an ACIVS17 paper. We are expanding this work and will update this report when there are substantial achievements. The following citation may be used for reference: 'Thuong-Khanh Tran, Xiaopeng Hong, Guoying Zhao. Sliding-window based micro-expression spotting: A benchmark. In Proc. Advanced Concepts for Intelligent Vision Systems (ACIVS), 2017'.
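As a concrete illustration of how a sliding-window based evaluation might decide whether an ME is spotted, here is a small hedged sketch. The interval-overlap (IoU) criterion, the 0.5 threshold, and the window sizes are assumptions made for illustration only and are not the protocols defined in the MESB.

```python
# Hedged sketch: match multi-scale sliding windows against a
# ground-truth ME interval via interval overlap.
def interval_iou(a, b):
    """Intersection over union of two frame intervals (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotted(pred_windows, gt_interval, iou_thresh=0.5):
    """True if any predicted window sufficiently overlaps the ground truth."""
    return any(interval_iou(w, gt_interval) >= iou_thresh for w in pred_windows)

# Example: windows slide over a 100-frame sequence at two scales.
windows = [(s, s + size - 1) for size in (16, 32) for s in range(0, 100, 8)]
print(spotted(windows, gt_interval=(40, 55)))  # True for this toy case
```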