2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)
DOI: 10.1109/fg.2018.00104
Facial Micro-Expressions Grand Challenge 2018: Evaluating Spatio-Temporal Features for Classification of Objective Classes

Cited by 26 publications (19 citation statements) · References 13 publications
“…To test the robustness of our deep neural network architecture and the ability to learn significant features from samples, we use two cross‐domain protocols: Holdout‐database Evaluation (HDE) and Composite‐database Evaluation (CDE). The HDE and CDE are used in the Micro‐Expression Grand Challenge (MEGC) 2018 [34]. They are tasks A and B in MEGC 2018, respectively, and use CASME II and SAMM dataset together.…”
Section: Results (confidence: 99%)
“…Two protocols – Hold-out Database Evaluation (HDE) and Composite Database Evaluation (CDE), were proposed in the challenge, using the CASME II and SAMM databases. The reported performances (Khor et al, 2018; Merghani et al, 2018; Peng et al, 2018) were poorer than most other works that apply only to single databases, indicating that future methods need to be more robust across domains.…”
Section: Challenges (confidence: 88%)
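The two cross-domain protocols quoted above can be made concrete with a minimal sketch. This is an illustrative reconstruction of HDE (train on one database, test on the other) and CDE (merge both databases, then leave-one-subject-out), not the challenge's official evaluation code; the sample record layout is an assumption.

```python
# Minimal sketch of the MEGC 2018 cross-domain protocols (Tasks A and B).
# Each sample is assumed to be a dict with "database" and "subject" keys;
# this data layout and the function names are illustrative, not official.

def holdout_database_eval(samples):
    """HDE (Task A): train on one database, test on the other, both ways."""
    splits = []
    for train_db, test_db in [("CASME II", "SAMM"), ("SAMM", "CASME II")]:
        train = [s for s in samples if s["database"] == train_db]
        test = [s for s in samples if s["database"] == test_db]
        splits.append((train, test))
    return splits

def composite_database_eval(samples):
    """CDE (Task B): merge both databases, then leave-one-subject-out."""
    subjects = sorted({(s["database"], s["subject"]) for s in samples})
    splits = []
    for held_out in subjects:
        test = [s for s in samples
                if (s["database"], s["subject"]) == held_out]
        train = [s for s in samples
                 if (s["database"], s["subject"]) != held_out]
        splits.append((train, test))
    return splits
```

Evaluating the same model under both splits is what exposes the cross-domain weakness the reviewers note: a model that scores well within one database can degrade sharply when the training and test databases differ.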
“…Consequently, a minimum threshold of duration for the expressive events has been established. Referring to the literature, intensity peaks too short to be considered facial expressions (<1 s) were excluded from the analysis [76,77]. Given this filtered signal, a set of signal processing features from the AUs activation peaks characterizing the interactions were then extracted.…”
Section: Signal Processing (confidence: 99%)
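The duration-threshold filtering described in the last snippet (excluding AU intensity peaks shorter than 1 s) can be sketched as a one-step filter. The peak representation as (onset, offset) times in seconds is an assumption for illustration; the cited work's actual peak-detection pipeline is not shown here.

```python
# Hedged sketch of the duration filter described above: drop AU activation
# peaks shorter than the minimum duration (< 1 s in the cited analysis).
# Peaks are assumed to be (onset_s, offset_s) pairs; this layout is illustrative.

def filter_short_peaks(peaks, min_duration_s=1.0):
    """Keep only peaks whose duration meets the minimum threshold."""
    return [(onset, offset) for onset, offset in peaks
            if (offset - onset) >= min_duration_s]
```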