2013
DOI: 10.1016/j.imavis.2013.02.001

Hierarchical On-line Appearance-Based Tracking for 3D head pose, eyebrows, lips, eyelids and irises

Cited by 39 publications (22 citation statements)
References 23 publications
“…
• Facial point tracking, image registration and Region of Interest (ROI) extraction
• Appearance and shape features computation
• Classification
First, facial characteristic landmarks on the speaker's face are tracked throughout each video of an utterance, using the Appearance-Based Tracker [35]. Only the points corresponding to the lower face region are used in further processing.…”
Section: Overview of the Proposed Methods
Citation type: mentioning (confidence: 99%)
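As a rough illustration of the three-stage pipeline in this excerpt, the sketch below walks through tracking, lower-face ROI extraction, feature computation, and classification. The landmark indices, the extract_roi and appearance_features helpers, and the tracker/classifier objects are hypothetical placeholders; the excerpt names the stages but not their APIs.

import numpy as np

# Illustrative placeholder: which landmark indices form the lower face is
# tracker-specific and not given in the excerpt.
LOWER_FACE_INDICES = np.arange(48, 68)

def extract_roi(frame, points, pad=10):
    """Crop the bounding box around the given landmarks, plus a small margin."""
    x0, y0 = points.min(axis=0).astype(int) - pad
    x1, y1 = points.max(axis=0).astype(int) + pad
    return frame[max(y0, 0):y1, max(x0, 0):x1]

def appearance_features(roi, size=(16, 16)):
    """Toy appearance descriptor: a downsampled grayscale patch, flattened."""
    ys = np.linspace(0, roi.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, roi.shape[1] - 1, size[1]).astype(int)
    return roi[np.ix_(ys, xs)].astype(float).ravel()

def process_utterance(frames, tracker, classifier):
    """Tracking -> ROI extraction -> feature computation -> classification.

    frames:     iterable of grayscale video frames for one utterance
    tracker:    placeholder for the appearance-based landmark tracker [35],
                assumed to return an (N, 2) array of points per frame
    classifier: placeholder for the final classifier
    """
    feature_sequence = []
    for frame in frames:
        landmarks = tracker.track(frame)        # all facial points, (N, 2)
        lower = landmarks[LOWER_FACE_INDICES]   # keep lower-face points only
        roi = extract_roi(frame, lower)
        feats = np.concatenate([appearance_features(roi), lower.ravel()])
        feature_sequence.append(feats)
    return classifier.predict(np.stack(feature_sequence))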
“…We use a portion of the database running approximately 85 minutes, which has been annotated for the emotion dimensions at hand by 5 raters, from which we use the averaged annotation. For extracting facial expression features, we employ an Active Appearance Model (AAM) based tracker [9], designed for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in video sequences. For each video frame, we obtain 113 2D points, resulting in a 226-dimensional feature vector.…”
Section: Data and Feature Extraction
Citation type: mentioning (confidence: 99%)
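The dimensionality follows directly from flattening the tracked points: 113 points × 2 coordinates = 226 values. A minimal sketch of that flattening, with illustrative names not taken from the cited paper:

import numpy as np

def landmarks_to_feature_vector(points_2d):
    """Flatten one frame's 2D landmarks into a single feature vector.

    points_2d: array of shape (113, 2), one (x, y) pair per tracked point.
    Returns an array of shape (226,): [x0, y0, x1, y1, ...].
    """
    assert points_2d.shape == (113, 2)
    return points_2d.reshape(-1)

# Usage: a frame's worth of points maps to the 226-dimensional vector.
frame_points = np.zeros((113, 2))
assert landmarks_to_feature_vector(frame_points).shape == (226,)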
“…We consider face and head movements tracked by the state-of-the-art trackers (i.e. the face tracker described in [25] and the head pose estimator described in [17]) and report on binary classification of video sequences into mimicry and non-mimicry categories based on the following widely used methodology: two similarity-based methods (cross-correlation as used in [22] and Generalised Time Warping [40]), and the state-of-the-art temporal classifier, Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) [32]. Performance of the methods is evaluated against the ground truth, representing human annotations of motor mimicry behaviour.…”
Section: Accepted Manuscript
Citation type: mentioning (confidence: 99%)
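Of the three baselines named in this excerpt, the cross-correlation one is the simplest to sketch: correlate the two interactants' feature trajectories over a range of time lags and threshold the peak similarity. The code below is an illustrative reconstruction under that reading, not the exact procedure of [22]; the max_lag and threshold values are arbitrary.

import numpy as np

def peak_cross_correlation(a, b, max_lag=25):
    """Peak Pearson correlation between two equal-length 1-D trajectories,
    searched over integer lags in [-max_lag, max_lag]."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        if len(x) > 1 and x.std() > 0 and y.std() > 0:
            best = max(best, float(np.corrcoef(x, y)[0, 1]))
    return best

def classify_mimicry(a, b, threshold=0.5):
    """Binary mimicry decision: peak lagged correlation above a threshold."""
    return peak_cross_correlation(a, b) >= threshold

Generalised Time Warping or an LSTM-RNN would take the place of the lagged correlation here, either aligning the two trajectories or learning the similarity from labelled sequences.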