2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
DOI: 10.1109/fg.2019.8756541
A Boost in Revealing Subtle Facial Expressions: A Consolidated Eulerian Framework

Abstract: Facial Micro-expression Recognition (MER) distinguishes the underlying emotional states of spontaneous subtle facial expressions. Automatic MER is challenging because 1) the intensity of subtle facial muscle movement is extremely low and 2) the duration of an ME is transient. Recent works adopt motion magnification or time interpolation to resolve these issues. Nevertheless, existing works divide them into two separate modules due to their non-linearity. Though such an operation eases the difficulty in implementation…
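The two pre-processing components the abstract names, motion magnification (MAG) and time interpolation (TIM), can each be sketched in a simplified linear form. This is a hypothetical NumPy illustration, not the paper's actual Eulerian formulation: the function names, the `alpha`/`kernel` parameters, and the moving-average band-pass are all my own assumptions.

```python
import numpy as np

def eulerian_magnify(frames, alpha=10.0, kernel=3):
    """Amplify subtle temporal variation: boost each frame's deviation
    from a temporal moving average (simplified linear Eulerian scheme,
    illustrative only)."""
    frames = np.asarray(frames, dtype=np.float64)
    pad = kernel // 2
    # temporal low-pass via a moving average over `kernel` frames
    padded = np.pad(frames, ((pad, pad),) + ((0, 0),) * (frames.ndim - 1),
                    mode="edge")
    lowpass = np.stack([padded[i:i + kernel].mean(axis=0)
                        for i in range(len(frames))])
    # magnify the band-passed (subtle) motion component
    return frames + alpha * (frames - lowpass)

def time_interpolate(frames, target_len):
    """Linearly resample a clip along the time axis to `target_len` frames
    (a stand-in for the paper's TIM module)."""
    frames = np.asarray(frames, dtype=np.float64)
    src = np.linspace(0.0, len(frames) - 1, num=target_len)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, len(frames) - 1)
    w = (src - lo).reshape((-1,) + (1,) * (frames.ndim - 1))
    return (1.0 - w) * frames[lo] + w * frames[hi]
```

Applying these two functions in sequence reproduces the "two separate modules" pipeline that, per the abstract, the paper argues should be consolidated into a single framework.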

Cited by 40 publications (18 citation statements) | References 38 publications
“…For fair evaluation, we used a protocol called leave-one-subject-out cross validation (LOSO) in the same way as the conventional methods [26][27][28][29][30]. In LOSO, samples for one subject are used as test data, and samples for the remaining subjects are adopted as training data.…”
Section: A. Experimental Results on Micro-expression Datasets
confidence: 99%
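The LOSO protocol quoted above can be made concrete with a short sketch. This is an illustrative helper, not code from the paper; the function name and return shape are my own.

```python
import numpy as np

def loso_splits(subject_ids):
    """Leave-one-subject-out cross validation: for each subject, all of
    that subject's samples form the test fold, and every other sample
    forms the training fold."""
    subject_ids = np.asarray(subject_ids)
    for subj in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == subj)   # held-out subject
        train = np.flatnonzero(subject_ids != subj)  # everyone else
        yield subj, train, test
```

The number of folds equals the number of subjects, and every sample appears in the test fold exactly once, which is why LOSO is the standard protocol for the small, subject-imbalanced micro-expression datasets.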
“…They extracted the atomic feature representing the region of interest from the optical flow information, applied sparse coding, and then classified the result using an SVM. Peng et al. proposed a consolidated Eulerian framework that integrated independent motion magnification and frame interpolation into a single process [30]. Guo et al. proposed the extended local binary patterns on three orthogonal planes (ELBPTOP) as feature descriptors for recognizing FME [45].…”
Section: Related Work
confidence: 99%
“…Since standard CNNs are limited by their weakness in representing part-global relation, Nguyen et al [34] adopted the newly proposed framework CapsuleNet [30] to recognize micro-expressions. Peng et al [28] explored the underlying joint formulation for Motion MAGnification (MAG) and Time Interpolation Model (TIM) and proposed a consolidated framework for revealing the spatial-temporal information in micro-expression clips. Khor et al [11] presented a Dual-Stream Shallow Network (DSSN) which robustly learns deep micro-expression features by exploiting a pair of shallow CNNs with heterogeneous motion-based inputs.…”
Section: Related Work, 2.1 Micro-expression Recognition
confidence: 99%
“…We compare our framework with other related works. These methods are: 1) LBP-TOP [13], LBP-SIP [35], STLBP-IP [8], STCLQP [9], which are LBP-based methods; 2) HIGO [15], FHOFO [7], Bi-WOOF [20], which are optical-flow-based methods; 3) Only-Apex [17], OFF-Apex [5], CNN+LSTM [12], Boost [28], DSSN [11], Shallow [19], Dual [40], Capsule [34], which are deep-feature methods; 4) Dynamic [32], which is an action-unit-assisted method; and 5) Neural [21], which is a macro-expression-assisted method.…”
Section: Comparison With Related Work
confidence: 99%
“…Video action recognition [1], a hot topic in video analysis and understanding, has drawn considerable attention from both academia and industry, since it has great value for many potential applications, such as behaviour analysis [2], security, and video affective computing [3]. On the one hand, new large-scale datasets, such as Kinetics [4] and Something-Something [5], make a great contribution to video action recognition.…”
Section: Introduction
confidence: 99%