2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
DOI: 10.1109/fg.2017.107
FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

Abstract: The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability…

Cited by 100 publications (69 citation statements)
References 36 publications
“…Subsets of these data have been used in FERA 2017 [29] and 3DFAW [21]. The CNN contains three convolutional layers and two fully connected layers (see Figure 1).…”
Section: Methods (mentioning)
confidence: 99%
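The quote above describes a small CNN with three convolutional layers followed by two fully connected layers. A minimal sketch of how the feature-map shapes would flow through such a network, assuming illustrative hyperparameters (96x96 input, 3x3 kernels, stride-2 pooling, channel counts of 32/64/128) that are not taken from the cited paper:

```python
# Hypothetical shape walk-through for a small CNN of the kind the citation
# describes: three conv + pool blocks, then two fully connected layers.
# All sizes below are illustrative assumptions, not the cited paper's values.

def conv2d_out(size, kernel=3, stride=1, padding=1):
    """Spatial size after a square convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after max pooling."""
    return (size - kernel) // stride + 1

def cnn_shapes(input_size=96, channels=(32, 64, 128)):
    """Return (channels, height, width) after each conv block, plus the
    flattened size that the first fully connected layer would receive."""
    size = input_size
    shapes = []
    for c in channels:                 # three conv + pool blocks
        size = pool_out(conv2d_out(size))
        shapes.append((c, size, size))
    flat = channels[-1] * size * size  # input width of the first FC layer
    return shapes, flat

shapes, flat = cnn_shapes()
print(shapes)  # [(32, 48, 48), (64, 24, 24), (128, 12, 12)]
print(flat)    # 18432
```

With same-padding convolutions, each block halves the spatial size only at the pooling step, which is why the flattened FC input depends solely on the final channel count and the number of poolings.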
“…Automated emotion recognition from facial expression is an active area of research [26, 29]. In clinical contexts, investigators have detected occurrence of depression, autism, conflict, and PTSD from visual features (i.e., face and body expression or movement) [7, 10, 18, 22, 25, 27].…”
Section: Introduction (mentioning)
confidence: 99%
“…Creating publicly available "in-the-wild" datasets is therefore of importance. Occurrence performance is measured in terms of F1, and intensity in terms of ICC (see [181] for details).…”
Section: Challenges and Opportunities (mentioning)
confidence: 99%
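The quote above notes that AU occurrence is scored with F1 and intensity with ICC. A minimal sketch of both metrics in plain Python, assuming binary occurrence labels and an ICC(3,1) consistency formulation with k = 2 raters (prediction vs. ground truth); whether [181] uses exactly this ICC variant is an assumption:

```python
# Sketches of the two challenge metrics. f1_score assumes binary 0/1 labels;
# icc31 implements two-way mixed, single-measure consistency ICC(3,1).

def f1_score(y_true, y_pred):
    """F1 for binary occurrence labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def icc31(y_true, y_pred):
    """ICC(3,1) with k = 2 raters: prediction vs. ground truth."""
    n, k = len(y_true), 2
    rows = list(zip(y_true, y_pred))
    grand = sum(y_true + y_pred) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(y_true) / n, sum(y_pred) / n]
    bms = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between targets
    jms = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # between raters
    total_ss = sum((x - grand) ** 2 for r in rows for x in r)
    ems = (total_ss - bms * (n - 1) - jms * (k - 1)) / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems)

print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))        # 0.5
print(icc31([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))     # 1.0 (perfect agreement)
```

F1 rewards per-frame agreement on a rare binary event, while ICC rewards consistency of the continuous intensity trajectory, which is why the two columns in the results table below behave quite differently across teams.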
“…
Team                 Occurrence detection (F1)   Intensity estimation (ICC)
Amirian et al [8]    -                           0.295
Batista et al [18]   0.506                       0.399
He et al [73]        0.507                       -
Li et al [95]        0.495                       -
Tang et al [166]     0.574                       -
Zhou et al [218]     -                           0.445
Baseline [181]       0.452                       0.217
…”
Section: Team (mentioning)
confidence: 99%
“…For each sequence, three types of metadata are provided: 27 manually annotated AUs, automatically tracked head pose (pitch, yaw, and roll), and 83 2D/3D facial landmarks. It is noted that the dataset in the Facial Expression Recognition and Analysis (FERA) 2017 challenge [Valstar et al 2017] was derived from the 3D model of the BP4D database. The dataset comprises 2,952 videos for training, 1,431 videos for validation, and 1,080 videos for test.…”
Section: (mentioning)
confidence: 99%
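The quote above lists the per-sequence metadata (27 AUs, head pose as pitch/yaw/roll, 83 2D/3D landmarks) and the split sizes. A sketch of what one sequence record could look like in memory; the field names and zero placeholders are assumptions for illustration, not the dataset's actual file format:

```python
# Illustrative in-memory layout for the metadata described in the quote.
# Field names are hypothetical; only the counts (27 AUs, 3 pose angles,
# 83 landmarks, split sizes) come from the text above.

SPLITS = {"train": 2952, "validation": 1431, "test": 1080}  # video counts

def make_sequence_record(n_frames):
    """One sequence's metadata, with zero placeholders per frame."""
    return {
        "aus": [[0] * 27 for _ in range(n_frames)],           # 27 AU labels
        "head_pose": [(0.0, 0.0, 0.0)] * n_frames,            # pitch, yaw, roll
        "landmarks_2d": [[(0.0, 0.0)] * 83] * n_frames,       # 83 2D points
        "landmarks_3d": [[(0.0, 0.0, 0.0)] * 83] * n_frames,  # 83 3D points
    }

rec = make_sequence_record(2)
print(sum(SPLITS.values()))  # 5463 videos in total
```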