A micro-expression (ME) is a special type of human facial expression that can reveal the genuine emotion a person is trying to conceal. Spontaneous ME (SME) spotting aims to identify the subsequences containing SMEs in a long facial video. The study of SME spotting is of significant importance, but it is also very challenging because, in real-world scenarios, SMEs may occur alongside normal facial expressions and other prominent motions such as head movements. In this paper, we improve a state-of-the-art SME spotting method called feature difference analysis (FD) in the following two aspects. First, FD partitions the facial area into uniform regions of interest (ROIs) and computes features of a selected sequence in each ROI. We propose a novel evaluation method that uses the Fisher linear discriminant to assign a weight to each ROI, leading to more semantically meaningful ROIs. Second, FD only considers two features (LBP and HOOF) independently. We introduce the state-of-the-art MDMO feature into FD and propose a simple yet efficient collaborative strategy that combines two complementary features, i.e., LBP, characterizing texture information, and MDMO, characterizing motion information. We call our improved FD method collaborative feature difference (CFD). Experimental results on two well-established SME datasets, SMIC-E and CASME II, show that CFD significantly improves the performance of the original FD.
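The abstract does not give implementation details, but the ROI-weighting idea can be illustrated with a minimal sketch. The code below assumes that a per-ROI feature-difference signal (e.g., an LBP or MDMO distance per frame) and frame-level SME labels are available for training; the function names and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def fisher_roi_weights(roi_features, labels):
    """Assign a Fisher-discriminant weight to each ROI.

    roi_features: (n_frames, n_rois) per-ROI feature-difference signal.
    labels:       (n_frames,) boolean, True for frames inside annotated
                  SME intervals (training data).
    Returns (n_rois,) weights normalised to sum to 1.
    """
    pos = roi_features[labels]       # frames inside SME intervals
    neg = roi_features[~labels]      # frames outside SME intervals
    # Fisher ratio per ROI: between-class separation over within-class scatter.
    numerator = (pos.mean(axis=0) - neg.mean(axis=0)) ** 2
    denominator = pos.var(axis=0) + neg.var(axis=0) + 1e-12
    ratio = numerator / denominator
    return ratio / ratio.sum()

def weighted_feature_difference(roi_features, weights):
    """Combine per-ROI feature differences into one spotting signal."""
    return roi_features @ weights
```

In this sketch, ROIs whose feature-difference signal separates SME frames from non-SME frames more cleanly receive larger weights, so the combined spotting signal emphasizes semantically informative facial regions.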
Image retargeting techniques adjust images to different sizes and have attracted much attention recently. Objective quality assessment (OQA) of image retargeting results is often desired in order to automatically select the best results. Existing OQA methods train a model using benchmarks (e.g., RetargetMe) in which subjective scores evaluated by users are provided. Observing that it is challenging even for human subjects to give consistent scores to retargeting results of different source images (diff-source-results), in this paper we propose a learning-based OQA method that trains a General Regression Neural Network (GRNN) model on relative scores, which preserve the ranking, of retargeting results of the same source image (same-source-results). In particular, we develop a novel training scheme with provable convergence that learns a common base scalar for same-source-results. With this source-specific offset, our computed scores not only preserve the ranking of subjective scores for same-source-results, but also provide a reference for comparing diff-source-results. We train and evaluate our GRNN model using human preference data collected in RetargetMe. We further introduce a subjective benchmark to evaluate the generalizability of different OQA methods. Experimental results demonstrate that our method outperforms ten representative OQA methods in ranking prediction and has better generalizability to different datasets.
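The abstract does not specify the training scheme, so the sketch below only illustrates one plausible alternating procedure under stated assumptions: a GRNN (Gaussian kernel regression) is fitted to offset-adjusted scores, and a per-source base scalar is re-estimated in turn. All names (`grnn_predict`, `fit_source_offsets`), the kernel bandwidth, and the update rule are hypothetical and not taken from the paper.

```python
import numpy as np

def grnn_predict(train_x, train_y, query_x, sigma=0.5):
    """GRNN prediction: Nadaraya-Watson regression with a Gaussian kernel."""
    d2 = ((query_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ train_y) / (w.sum(axis=1) + 1e-12)

def fit_source_offsets(features, rel_scores, source_ids, sigma=0.5, n_iter=20):
    """Alternately fit GRNN targets and per-source base scalars.

    features:   (n, d) feature vectors of retargeting results.
    rel_scores: (n,) relative scores, comparable only within one source image.
    source_ids: (n,) integer id of the source image of each result.
    Returns per-source offsets that place same-source scores on a common scale.
    """
    offsets = np.zeros(source_ids.max() + 1)
    for _ in range(n_iter):
        # Current absolute scores: relative score plus its source's offset.
        targets = rel_scores + offsets[source_ids]
        preds = grnn_predict(features, targets, features, sigma)
        # Update each source's offset to best match the GRNN prediction.
        for s in np.unique(source_ids):
            mask = source_ids == s
            offsets[s] = (preds[mask] - rel_scores[mask]).mean()
    return offsets
```

Because the offset is shared by all results of one source image, the within-source ranking given by the relative scores is preserved, while the learned scalars place results of different sources on a comparable scale.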