2014
DOI: 10.1109/tcyb.2013.2249063

A Dynamic Appearance Descriptor Approach to Facial Actions Temporal Modeling

Abstract: Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behaviour. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account or focus only on prototypic facial expressions of the six basic emotions. Facial dynamics can be explicitly analysed by detecting the constituent temporal segments of the Facial Action Coding System's (FACS) Action Units (AUs): onset, apex, and offset. In thi…
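To make the onset-apex-offset notion from the abstract concrete, here is a minimal Python sketch that labels the temporal segments of a single AU from a hypothetical per-frame intensity trace by thresholding the signal and its slope. This is not the dynamic appearance descriptor approach the paper proposes; the function name, thresholds, and toy trace are assumptions for illustration only.

# Conceptual sketch (not the paper's method): label the temporal segments
# of one Action Unit -- onset, apex, offset -- from an assumed per-frame
# AU intensity trace, using simple thresholds on value and slope.
import numpy as np

def label_temporal_segments(intensity, active_thr=0.2, slope_thr=0.02):
    """Return one label per frame: 'neutral', 'onset', 'apex' or 'offset'."""
    intensity = np.asarray(intensity, dtype=float)
    slope = np.gradient(intensity)           # approximate temporal derivative
    labels = []
    for x, dx in zip(intensity, slope):
        if x < active_thr:
            labels.append("neutral")
        elif dx > slope_thr:
            labels.append("onset")            # intensity still rising
        elif dx < -slope_thr:
            labels.append("offset")           # intensity falling back
        else:
            labels.append("apex")             # high and roughly constant
    return labels

# Toy example: a smooth activation that rises, plateaus and decays.
t = np.linspace(0, 1, 50)
trace = np.clip(np.sin(np.pi * t) ** 2, 0, 1)
print(label_temporal_segments(trace)[:10])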

Cited by 120 publications (75 citation statements)
References 47 publications
“…pressures, facial expressions of emotion comprise specific facial movements [4][5][6][7][8] to support a near-optimal system of signaling and decoding [9,10]. Although highly dynamical [11,12], little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time?…”
mentioning
confidence: 99%
“…There are many good features that can be extracted from each video to capture the movement of the fingers. Here LBP-TOP [20] and LPQ-TOP [6] are selected. These features capture not only the distribution of local information in each frame, but also the distribution of finger movements over time.…”
Section: Video Based Micro-gesture Recognition
mentioning
confidence: 99%
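For context on the descriptors named in this statement, the following Python sketch shows the general LBP-TOP idea: basic local binary pattern codes are histogrammed on the XY, XT, and YT planes of a video volume and the three normalised histograms are concatenated. It is a simplified illustration under assumed settings (plain 3x3 LBP, no block partitioning, no uniform patterns, random toy clip), not the implementation used in [20] or [6].

# Simplified LBP-TOP-style descriptor: LBP histograms on the three
# orthogonal planes (XY = appearance, XT/YT = motion), concatenated.
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: compare each pixel with its 8 neighbours."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],    img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def plane_histogram(volume, axis):
    """Histogram LBP codes over all 2-D slices taken along `axis`."""
    hist = np.zeros(256, dtype=float)
    for i in range(volume.shape[axis]):
        sl = np.take(volume, i, axis=axis)
        if min(sl.shape) >= 3:                 # LBP needs a 3x3 window
            hist += np.bincount(lbp_codes(sl).ravel(), minlength=256)
    return hist / max(hist.sum(), 1.0)         # normalise per plane

def lbp_top(video):
    """video: (T, H, W) grey-level array -> 768-D concatenated histogram."""
    xy = plane_histogram(video, axis=0)        # per-frame appearance
    xt = plane_histogram(video, axis=1)        # horizontal-temporal motion
    yt = plane_histogram(video, axis=2)        # vertical-temporal motion
    return np.concatenate([xy, xt, yt])

# Toy usage on a random clip of 20 frames of 32x32 pixels.
clip = np.random.randint(0, 256, size=(20, 32, 32), dtype=np.uint8)
print(lbp_top(clip).shape)                     # (768,)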
“…Methods that do so use either temporal image features [31,32] or DBN-based models such as HMMs [33] and CRFs [34]. In general, these works perform either majority voting over the static detections [25], or detection of the temporal phases of AUs followed by rule-based classification of the sequences (by detecting the onset-apex-offset sequence of an AU) [33,35]. Other temporal models based on Ordinal CRFs have been proposed for modeling of AU temporal phases [36] and their intensity [1]; however, they do not perform AU detection.…”
Section: Facial AU Detection
mentioning
confidence: 99%
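The rule-based step mentioned in this statement (declaring an AU activation when an onset-apex-offset pattern of temporal phases is observed) can be sketched as below. The per-frame phase labels are assumed to come from some upstream classifier such as an HMM or CRF; the label vocabulary and helper function are hypothetical, not taken from [33] or [35].

# Hedged sketch of the rule-based sequence classification: collapse
# per-frame phase predictions into runs, then count onset-apex-offset
# patterns as AU activation events.
import itertools

def au_events(phase_labels):
    """Count onset-apex-offset patterns in a per-frame phase sequence."""
    runs = [label for label, _ in itertools.groupby(phase_labels)
            if label != "neutral"]
    events = 0
    for i in range(len(runs) - 2):
        if runs[i:i + 3] == ["onset", "apex", "offset"]:
            events += 1
    return events

frames = (["neutral"] * 5 + ["onset"] * 4 + ["apex"] * 6 +
          ["offset"] * 3 + ["neutral"] * 5)
print(au_events(frames))   # 1 detected activation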