2014
DOI: 10.1371/journal.pone.0086041

CASME II: An Improved Spontaneous Micro-Expression Database and the Baseline Evaluation

Abstract: A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high-quality databases with sufficient training samples, which are currently not available. We reviewed the previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on the facial area). We elicited participants' facial …


Cited by 708 publications (555 citation statements)
References 23 publications
“…They reported that temporal interpolation could help achieve micro-expression detection performance equivalent to that of a standard 25 fps camera. Later on, Yan et al [49] proposed CASME II, the most comprehensive micro-expression dataset to date, which comprises a total of 247 videos from 26 subjects, captured at 200 fps and coded into 5 class labels. More recently, several approaches have been proposed for micro-expression recognition on this dataset: Liong et al [18,19] introduced features derived from optical strain information, Wang et al [46,47] reworked the popular LBP-TOP into efficient variants that retain the essential information, while Park et al [31] attempted to improve recognition by leveraging an adaptive motion magnification approach.…”
Section: Micro-expression Recognition (mentioning)
confidence: 99%
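
The LBP-TOP descriptor mentioned in this statement is the baseline feature evaluated on CASME II: local binary patterns are sampled on the XY, XT, and YT planes of a video volume and their histograms concatenated. Below is a minimal sketch of the idea; the single middle slice per plane, the radius, and the histogram size are illustrative assumptions rather than the authors' exact configuration, which aggregates histograms over many slices and spatial blocks.

```python
# Minimal LBP-TOP sketch (illustrative assumptions: one middle slice per
# plane, radius 1, 8 sampling points; full LBP-TOP aggregates histograms
# over all slices and spatial blocks).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, n_points=8, radius=1):
    """Concatenate uniform-LBP histograms from the XY, XT and YT planes
    of a (T, H, W) grayscale video volume."""
    t, h, w = volume.shape
    planes = [
        volume[t // 2, :, :],   # XY plane: spatial texture of one frame
        volume[:, h // 2, :],   # XT plane: horizontal motion over time
        volume[:, :, w // 2],   # YT plane: vertical motion over time
    ]
    n_bins = n_points + 2       # number of distinct 'uniform' LBP codes
    hists = []
    for plane in planes:
        codes = local_binary_pattern(plane, n_points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)   # 3 * (n_points + 2) values

# Example on a random clip shaped like a CASME II sample (200 fps, ~280x340 crop)
clip = (np.random.rand(200, 280, 340) * 255).astype(np.uint8)
print(lbp_top(clip).shape)   # (30,)
```

In baseline evaluations, descriptors of this kind are typically fed to a standard classifier such as an SVM.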
“…Thus, creating a spontaneous micro-expression video database is a costly effort. SMIC [16], CASME [51] and CASME II [49] are, to the best of our knowledge, the most current micro-expression databases. However, the SMIC dataset is much smaller (only 164 samples from 16 subjects) and has only 3 valid labels for recognition (positive, negative, and surprised), while CASME is simply a preliminary subset of the newer CASME II.…”
Section: Dataset (mentioning)
confidence: 99%
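
The dataset figures quoted above can be restated compactly; the structure and key names below are my own summary of the passage, not an official schema.

```python
# Dataset statistics as quoted in the citation statement above; the dict
# layout and key names are my own (hypothetical), the numbers are from the text.
DATASETS = {
    "SMIC": {
        "samples": 164,
        "subjects": 16,
        "labels": ["positive", "negative", "surprised"],
    },
    "CASME II": {
        "samples": 247,
        "subjects": 26,
        "num_classes": 5,
        "fps": 200,
    },
}

for name, d in DATASETS.items():
    print(f"{name}: {d['samples']} samples from {d['subjects']} subjects")
```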
“…Several datasets, focusing on different applications, are available for emotion recognition. For example, the DEAP dataset provides EEG and face recordings of participants while they watch music videos, for the analysis of human affective states [9]; the SEMAINE database provides voice and facial information to study the behaviour of subjects interacting with virtual avatars [23]; the MAHNOB-HCI database was created for the study of emotions while humans watch multimedia, supplying data streams such as audio, an RGB video and five monochrome videos of the face, EEG, ECG, respiration amplitude, skin temperature and eye-gaze data [21]; and the CASME II dataset studies facial micro-expressions for security and medical applications, requiring cameras with higher frame rate and spatial resolution [24].…”
Section: Introduction (mentioning)
confidence: 99%