2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
DOI: 10.1109/fg.2019.8756630

Fully End-to-End Composite Recurrent Convolution Network for Deformable Facial Tracking In The Wild

Abstract: Human facial tracking is an important task in computer vision which has recently lost pace compared to other facial analysis tasks. The majority of currently available trackers have two major limitations: they make little use of temporal information, and they rely heavily on handcrafted features, without taking full advantage of the large annotated datasets that have recently become available. In this paper we present a fully end-to-end facial tracking model based on current state-of-the-art deep model architectures…

Cited by 15 publications (21 citation statements) · References 30 publications

“…Since they may contain other redundant parts of the scene, which consequently slow down the training process. To do this, we used the facial tracking model of [37] and cropped the facial area given the detected facial landmarks.…”
Section: Methods (citation type: mentioning)
confidence: 99%
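The cropping step quoted above is not spelled out in the snippet; the sketch below shows one plausible way to crop a facial region from tracked landmarks (a tight box around the points, expanded by a margin and clipped to the image). The function name, the margin value, and the (N, 2) landmark layout are assumptions for illustration, not the cited paper's exact procedure.

```python
import numpy as np

def crop_face_from_landmarks(image, landmarks, margin=0.2):
    """Crop the facial region from an image given an (N, 2) array of landmarks.

    A tight bounding box around the landmark points is expanded by `margin`
    (a fraction of the box size on each side) and clipped to the image borders.
    """
    h, w = image.shape[:2]
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    pad_x = (x_max - x_min) * margin
    pad_y = (y_max - y_min) * margin
    x0 = int(max(0, x_min - pad_x))
    y0 = int(max(0, y_min - pad_y))
    x1 = int(min(w, x_max + pad_x))
    y1 = int(min(h, y_max + pad_y))
    return image[y0:y1, x0:x1]
```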
“…On the Sentiment Analysis in the Wild (SEWA) dataset [23], we followed the original person-independence protocols and applied the feature extraction techniques described in the previous sections. Moreover, we also used the external tracker of [2] to refine the given bounding box. We also include experiments on the Aff-Wild2 dataset, as part of the Affective Behavior Analysis in-the-Wild (ABAW) 2020 Competition, to provide a more up-to-date analysis of our model's performance.…”
Section: E. Model Training (citation type: mentioning)
confidence: 99%
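The bounding-box refinement mentioned above is only stated in passing; the following sketch illustrates one way such a refinement could work, where an external landmark tracker is run inside the provided box and the box is then tightened around the predicted points. The `tracker` callable, the margin, and the coordinate conventions are assumptions, not the actual procedure of [2].

```python
import numpy as np

def refine_bbox(image, bbox, tracker, margin=0.1):
    """Refine a provided bounding box using an external landmark tracker.

    `tracker` is assumed to be a callable that maps an image crop to an
    (N, 2) array of landmark coordinates relative to that crop; the box is
    then tightened around the predicted landmarks and slightly padded.
    """
    x0, y0, x1, y1 = (int(v) for v in bbox)
    landmarks = tracker(image[y0:y1, x0:x1])       # (N, 2) points in crop coords
    landmarks = landmarks + np.array([x0, y0])     # back to full-image coords
    lx0, ly0 = landmarks.min(axis=0)
    lx1, ly1 = landmarks.max(axis=0)
    pad_x = (lx1 - lx0) * margin
    pad_y = (ly1 - ly0) * margin
    h, w = image.shape[:2]
    return (max(0.0, lx0 - pad_x), max(0.0, ly0 - pad_y),
            min(float(w), lx1 + pad_x), min(float(h), ly1 + pad_y))
```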
“…2) Facial Heatmap Estimator: F_HE augments the input features by introducing an additional facial heatmap layer centered around the landmark estimates l_t [6]. To do this, we first obtain the facial landmark points l_t using the F_LL part of the Recurrent Tracker [4], which consists of an Inception-ResNet (res) [27] and a regression layer with weight matrix W_FLL parameterized by Φ_3:…”
Section: Deep Image Denoiser (DID) (citation type: mentioning)
confidence: 99%
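The quoted statement describes augmenting the input features with a heatmap channel centered on the landmark estimates l_t, but the construction itself is cut off before the equation. A minimal sketch of one common formulation (a Gaussian bump rendered at each landmark and combined by a per-pixel maximum) is given below; the function name, the sigma value, and the tensor layout are assumptions rather than the authors' exact definition.

```python
import torch

def landmark_heatmap(landmarks, height, width, sigma=3.0):
    """Render a heatmap channel with a Gaussian bump at each landmark.

    `landmarks` is a (B, N, 2) tensor of (x, y) pixel coordinates; the result
    is a (B, 1, height, width) map that can be concatenated to the input
    features as an extra channel.
    """
    b = landmarks.shape[0]
    ys = torch.arange(height, dtype=torch.float32).view(1, 1, height, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, 1, 1, width)
    lx = landmarks[..., 0].view(b, -1, 1, 1)
    ly = landmarks[..., 1].view(b, -1, 1, 1)
    dist2 = (xs - lx) ** 2 + (ys - ly) ** 2        # (B, N, H, W) squared distances
    heat = torch.exp(-dist2 / (2.0 * sigma ** 2))  # Gaussian bump per landmark
    return heat.max(dim=1, keepdim=True).values    # combine landmarks into one channel
```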