2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00360
Robust Facial Landmark Detection via Occlusion-Adaptive Deep Networks

Cited by 111 publications (69 citation statements)
References 50 publications
“…The proposed method is compared to other state-of-the-art landmark localization methods. Among these approaches, coordinate regression methods include SDM [35], TSCN [26], IFA [1], CFSS [40], TCDCN [38], TSTN [14], DSRN [17], ODN [39], STA [30], Sun et al.'s work [27] and GAN [36]. Heatmap regression methods include Newell et al.'s work [18], SAN [9], LAB [32], CNN-CRF [5], LaplaceKL [23], Sun et al.'s work [28], DSNT [19], DARK [37], FHR [30] and GHCU [15]. The methods of [10,11,21] for landmark detection are trained under different conditions from our method, so they are not included in our comparison.…”
Section: Comparison with State-of-the-Art Methods (mentioning)
confidence: 99%
“…Results on COFW: The comparison is shown in Table 3. Our method achieves performance similar to that of the recent method ODN (Zhu et al., 2019b). Note that our method is not specifically designed for occluded images (local boundary information may be obscured by occlusions).…”
Section: Results on AFLW (mentioning)
confidence: 66%
“…Inter-Pupil Distance NME (%), Common / Challenge / Full subsets:

Method | Common | Challenge | Full
ESR (Cao et al., 2014) | 5.28 | 17.00 | 7.58
SDM (Xiong and De la Torre, 2013) | 5.57 | 15.40 | 7.52
LBF (Ren et al., 2014) | 4.95 | 11.98 | 6.32
TCDCN (Zhang et al., 2014b) | 4.80 | 8.60 | 5.54
CFSS (Zhu et al., 2015) | 4.73 | 9.98 | 5.76
MDM (Trigeorgis et al., 2016) | 4.83 | 10.14 | 5.88
Lv et al. (2017) | 4.36 | 7.56 | 4.99
AAN (Yue et al., 2018) | 4.38 | 9.44 | 5.39
ECT | 4.66 | 7.96 | 5.31
DSRN (Miao et al., 2018) | 4.12 | 9.68 | 5.
(Kumar and Chellappa, 2018) | 3.67 | 7.62 | 4.44
JDR (Zhu et al., 2019a) | 3.68 | 7.16 | 4.36
SAN (Dong et al., 2018a) | 3.34 | 6.60 | 3.98
Reg + SBR (Dong et al., 2018b) | 7.93 | 15.98 | 9.46
CPM + SBR (Dong et al., 2018b) | 3.28 | 7.58 | 4.10
ODN (Zhu et al., 2019b) | 3.56 | 6.67 | 4.17
ADN (Sadiq et al., 2019) | 3

… has inferior performance compared to the HRMs (CPM + SBR). By integrating our Fine-grained Facial Landmark Detection (FFLD) framework, our CRM-based model has comparable performance to state-of-the-art HRMs (Kumar and Chellappa, 2018; Dong et al., 2018a,b; Tai et al., 2019).…”
Section: Methods (mentioning)
confidence: 99%
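The results quoted above are reported as inter-pupil-distance normalized mean error (NME). For reference, below is a minimal sketch of how that metric is commonly computed; the function name, array shapes, and pupil-index arguments are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def nme_inter_pupil(pred, gt, left_pupil_idx, right_pupil_idx):
    """Inter-pupil-distance normalized mean error (NME) for one face.

    pred, gt: (N, 2) arrays of predicted / ground-truth landmark coordinates.
    left_pupil_idx, right_pupil_idx: index or list of indices whose mean
    ground-truth position defines each pupil centre.
    """
    per_point_err = np.linalg.norm(pred - gt, axis=1)       # Euclidean error per landmark
    left = np.atleast_2d(gt[left_pupil_idx]).mean(axis=0)   # pupil centres from ground truth
    right = np.atleast_2d(gt[right_pupil_idx]).mean(axis=0)
    inter_pupil = np.linalg.norm(left - right)               # normalizing distance
    return per_point_err.mean() / inter_pupil

# Dataset-level NME (%) is the mean over all test faces, multiplied by 100:
# nme_percent = 100 * np.mean([nme_inter_pupil(p, g, L_IDX, R_IDX) for p, g in test_pairs])
```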
“…The model needed only an image as input and addressed occlusion of large areas by detecting the visible facial components in cases where existing face detectors failed to detect faces. Zhu and colleagues developed Occlusion-adaptive Deep Networks (ODNs) in 2019 to address the occlusion problem [21]. ODNs extract feature maps using residual blocks and then feed the extracted feature maps into two CNN modules: a Geometry-aware Module and a Distillation Module.…”
Section: A. Facial Landmark Detection (mentioning)
confidence: 99%
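The dataflow described in that statement (residual blocks producing feature maps that feed a geometry-aware module and a distillation module) can be sketched as follows. This is an illustrative sketch under assumed layer sizes and an assumed ResNet-18 backbone; it is not the authors' published ODN implementation, and all module internals are placeholders.

```python
import torch
import torch.nn as nn
import torchvision

class ODNSketch(nn.Module):
    """Illustrative dataflow only: a residual backbone produces feature maps that
    are passed to a geometry-aware module and a distillation module, whose
    combined output is used to regress landmark coordinates."""

    def __init__(self, num_landmarks=68):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        # Residual blocks up to the last conv stage -> feature maps
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # Geometry-aware module: placeholder conv modelling relations between facial parts
        self.geometry = nn.Sequential(nn.Conv2d(512, 512, 3, padding=1), nn.ReLU())
        # Distillation module: placeholder per-location mask down-weighting occluded regions
        self.distill = nn.Sequential(nn.Conv2d(512, 512, 1), nn.Sigmoid())
        self.head = nn.Linear(512, num_landmarks * 2)

    def forward(self, x):
        feats = self.backbone(x)                               # (B, 512, H/32, W/32)
        cleaned = self.geometry(feats) * self.distill(feats)   # suppress occluded features
        pooled = cleaned.mean(dim=(2, 3))                      # global average pooling
        return self.head(pooled).view(-1, self.head.out_features // 2, 2)

# landmarks = ODNSketch()(torch.randn(1, 3, 224, 224))  # -> (1, 68, 2) coordinate predictions
```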