Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Application 2020
DOI: 10.5220/0009099700930102
Configural Representation of Facial Action Units for Spontaneous Facial Expression Recognition in the Wild

Cited by 8 publications (7 citation statements); References: 0 publications.
“…In making the decision to design a simpler set of calculations, research was undertaken into other algorithms that predicted emotion without using transient features. But a number were found to either not to predict in real‐time, or be based on having an initial neutral image of a subject as a reference point 64,65 …”
Section: Proposed Reduced Feature Set and Feature Extractormentioning
confidence: 99%
“…Given a dataset of 2D face images, AUFART is trained by minimizing a loss over its predicted parameters, including the rotation matrix and 2D translation. We employ the Mediapipe landmark detector to predict landmarks from 2D images, utilizing a total of 105 landmarks distributed across the eyebrows, eyes, nose, and mouth regions [27]. Table 1 provides details on the facial landmarks associated with AUs, and Fig.…”
Section: Loss Functionmentioning
confidence: 99%
“…The AU-based relative distance loss computes the relative distance between AU configural features for image landmarks and the projected 3D landmarks. The AU configural features involve calculating relative distances between facial landmark points and are used to determine AUs [27]. For example, AU 4 (Brow Lowerer) is determined based on the distance between the landmark points 21 and 22, which correspond to the inner eyebrow landmarks on the left and right.…”
Section: Loss Functionmentioning
confidence: 99%
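The configural-feature computation described in the excerpt above can be sketched as follows. This is a hypothetical illustration only: the function names and the mean-absolute-difference form of the loss are assumptions, not the cited paper's implementation; landmark indices 21 and 22 follow the convention quoted above.

```python
import numpy as np

def au4_configural_feature(landmarks, left_inner_brow=21, right_inner_brow=22):
    """Distance between the inner eyebrow landmarks, used as the
    configural feature for AU 4 (Brow Lowerer) in the quoted example."""
    return float(np.linalg.norm(landmarks[left_inner_brow] - landmarks[right_inner_brow]))

def au_relative_distance_loss(image_landmarks, projected_landmarks, au_pairs):
    """Compare configural distances between detected 2D landmarks and
    landmarks projected from the 3D face model, over a set of AU landmark
    pairs; here a mean absolute difference is assumed."""
    diffs = []
    for i, j in au_pairs:
        d_img = np.linalg.norm(image_landmarks[i] - image_landmarks[j])
        d_proj = np.linalg.norm(projected_landmarks[i] - projected_landmarks[j])
        diffs.append(abs(d_img - d_proj))
    return float(np.mean(diffs))

# Usage with 105 landmarks, as in the quoted Mediapipe-based setup.
lm_image = np.random.rand(105, 2)
lm_projected = lm_image + 0.01 * np.random.rand(105, 2)
loss = au_relative_distance_loss(lm_image, lm_projected, au_pairs=[(21, 22)])
```

Because the feature is a relative distance between landmark pairs, it needs no neutral reference frame for the subject, which is the property the other citing excerpts contrast against reference-image-based methods.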
“…In making the decision to design a simpler set of calculations, research was undertaken into other algorithms that predicted emotion without using transient features. But a number were found to either not to predict in real-time, or be based on having an initial neutral image of a subject as a reference point [91,92].…”
Section: Proposed Reduced Feature Set and Feature Extractormentioning
confidence: 99%