2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
DOI: 10.1109/fg.2019.8756557

Heatmap-Guided Balanced Deep Convolution Networks for Family Classification in the Wild

Abstract: Automatic kinship recognition using computer vision, which aims to infer the blood relationship between individuals by comparing only their facial features, has recently started to gain attention. The introduction of large kinship datasets, such as Families In the Wild (FIW), has enabled large-scale modeling with state-of-the-art deep learning models. Among the kinship recognition tasks, family classification lacks significant progress due to its increasing difficulty in relation to the …

Cited by 7 publications (9 citation statements) · References 31 publications
“…The pipeline of our base model ANCLaF starts with the G network. It receives either the original input image I, or a distorted version of it, Ĩ, as detailed in (Aspandi et al., 2019c; Aspandi et al., 2019a). It simultaneously produces the cleaned reconstruction of the input image Î and a 2D latent representation that will be used as features (Z):…”
Section: Adversarial Network With
Classification: mentioning (confidence: 99%)
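The quoted pipeline describes an encoder-decoder generator that maps an input face image (clean I or distorted Ĩ) to both a cleaned reconstruction Î and a 2D latent feature map Z. The sketch below illustrates that shape of model in PyTorch; the class name, layer sizes, and activations are illustrative assumptions, not the ANCLaF architecture of the cited work.

```python
import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    """Illustrative encoder-decoder: maps an input image (clean I or distorted Ĩ)
    to a cleaned reconstruction Î and a 2D latent feature map Z.
    Layer sizes and names are assumptions, not the ANCLaF architecture."""

    def __init__(self, in_ch: int = 3, latent_ch: int = 64):
        super().__init__()
        # Encoder: downsample the image into a 2D latent representation Z.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample Z back to the cleaned reconstruction Î.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, kernel_size=4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)       # 2D latent features Z, reused downstream
        x_hat = self.decoder(z)   # cleaned reconstruction Î
        return x_hat, z

# Usage: the same network accepts the original image I or a distorted version Ĩ.
g = GeneratorSketch()
i_tilde = torch.randn(1, 3, 128, 128)   # stand-in for a distorted face crop
i_hat, z = g(i_tilde)
```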
“…where f_i is the total number of instances of discrete V-A class i, and F is a normalisation factor (Aspandi et al., 2019a) for the total V-A classes (discretised by a value of 10). This normalisation factor is crucial in cases of large imbalance in the number of instances per class, like in the AFEW-VA dataset (see Section 4.1).…”
Section: Training Losses
Classification: mentioning (confidence: 99%)
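The equation referenced in this quote is truncated, but the description points to an inverse-frequency weighting over the 10 discretised valence/arousal bins, with f_i the count of class i and F a normalisation factor. The snippet below is one plausible instantiation under that assumption; the binning range, the choice to rescale the weights to mean 1, and the function name are illustrative, not taken from the cited papers.

```python
import numpy as np

def balanced_class_weights(labels: np.ndarray, n_bins: int = 10):
    """Hypothetical inverse-frequency weighting for discretised valence/arousal classes.
    f_i is the instance count of class i; F rescales the weights so they average to 1.
    The exact formula is truncated in the quote above, so this is only one plausible form."""
    # Discretise continuous values in [-1, 1] into n_bins classes.
    edges = np.linspace(-1.0, 1.0, n_bins + 1)[1:-1]
    bins = np.digitize(labels, edges)
    f = np.bincount(bins, minlength=n_bins).astype(float)
    f = np.maximum(f, 1.0)          # avoid division by zero for empty bins
    raw = 1.0 / f                   # rare classes get larger weights
    F = n_bins / raw.sum()          # normalisation factor over all classes
    return F * raw                  # per-class weights with mean 1

weights = balanced_class_weights(np.random.uniform(-1, 1, size=10_000))
```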
“…where n_i is the total number of instances of discrete valence/arousal class i, and N is the normalisation factor [1] for the total valence/arousal classes. This normalisation factor is crucial given the considerably unbalanced class instances in the Aff-Wild2 dataset [17].…”
Section: B. Conditional Discriminator-based Affect Estimator
Classification: mentioning (confidence: 99%)
“…where n_i is the total number of instances of discrete valence/arousal class i, and N is the normalisation factor [1] for the total valence/arousal classes. This normalisation factor is crucial given the considerably unbalanced class instances in the Aff-Wild2 Challenge [17].…”
Section: B. Conditional Discriminator-based Affect Estimator
Classification: mentioning (confidence: 99%)
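Both quotes reuse the same balancing idea with n_i and N in place of f_i and F. As a follow-on to the previous sketch, the snippet below shows one way such per-class weights could scale a per-sample regression loss for discretised valence/arousal targets; pairing the weights with a squared-error term is an assumption for illustration, not the exact loss of the cited works.

```python
import torch

def weighted_va_loss(pred: torch.Tensor, target: torch.Tensor,
                     class_weights: torch.Tensor, n_bins: int = 10):
    """Sketch of how per-class weights N/n_i (or F/f_i) could enter a regression loss.
    Each continuous valence/arousal target is mapped to its discrete class, and the
    squared error for that sample is scaled by the class weight."""
    # Map targets in [-1, 1] to discrete class indices 0..n_bins-1.
    edges = torch.linspace(-1.0, 1.0, n_bins + 1)[1:-1]
    idx = torch.bucketize(target, edges)
    w = class_weights[idx]           # per-sample weight taken from its class
    return (w * (pred - target) ** 2).mean()

# Usage with weights computed as in the previous sketch (converted to a tensor).
weights_t = torch.ones(10)           # placeholder for weights estimated from the data
loss = weighted_va_loss(torch.rand(32), torch.rand(32) * 2 - 1, weights_t)
```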
“…Our models were trained using an NVIDIA Titan X GPU, and it took approximately two days to converge. The source code of our models is available at our github page 1…”
Section: E. Model Training
Classification: mentioning (confidence: 99%)