2020
DOI: 10.1109/tifs.2020.2969552
Multi-Stage Feature Constraints Learning for Age Estimation

Cited by 96 publications (56 citation statements)
References 38 publications
“…In addition to the common ResNet, we can also use MobileNetv2 [35] as a lightweight backbone. With a little modification to different backbones, we can extract the features of different levels that we need to prepare for the up-sampling process [36].…”
Section: Channel Feature Compression (mentioning, confidence: 99%)
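The statement above describes tapping a backbone (ResNet or MobileNetV2) at several depths to collect feature maps of different resolutions before up-sampling. As a rough, framework-free sketch of that idea (not the cited paper's implementation), the snippet below collects progressively coarser "feature maps" by repeated 2x2 average pooling; in a real network each level would come from a backbone stage instead:

```python
import numpy as np

def avg_pool2x2(x):
    """2x2 average pooling, halving each spatial dimension."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def multi_level_features(image, levels=3):
    """Collect maps at successively coarser resolutions, mimicking the
    multi-level taps a backbone exposes to an up-sampling decoder.
    (Illustrative stand-in, not an actual ResNet/MobileNetV2 forward pass.)"""
    feats = [image]
    for _ in range(levels - 1):
        feats.append(avg_pool2x2(feats[-1]))
    return feats

img = np.ones((32, 32))
feats = multi_level_features(img)
print([f.shape for f in feats])  # [(32, 32), (16, 16), (8, 8)]
```

A decoder would then up-sample the coarsest map and fuse it with the finer levels in reverse order.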
“…Deep neural networks need large amounts of training data, but such learning samples are difficult to obtain. Data augmentation is therefore necessary to avoid overfitting when only a few training samples are available [36]. Thus, 5000 pictures were generated by scaling, translation, flipping, and rotation.…”
Section: Data Augmentation (mentioning, confidence: 99%)
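The augmentations named in that statement (translation, flipping, rotation) can be sketched with plain array operations. This is a minimal illustrative version, not the cited paper's pipeline; scaling is omitted here because it needs interpolation, and the translation is a simple zero-padded integer shift:

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of a 2-D image array:
    horizontal/vertical flips, a 90-degree rotation, and a small
    random integer-pixel translation (zero-padded at the borders)."""
    variants = [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]
    # random translation by up to 2 pixels along each axis
    dy, dx = rng.integers(-2, 3, size=2)
    shifted = np.zeros_like(image)
    src = image[max(0, -dy):image.shape[0] - max(0, dy),
                max(0, -dx):image.shape[1] - max(0, dx)]
    shifted[max(0, dy):max(0, dy) + src.shape[0],
            max(0, dx):max(0, dx) + src.shape[1]] = src
    variants.append(shifted)
    return variants

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)
out = augment(img, rng)
print(len(out))  # 4 augmented variants
```

Applying several such transforms to each original image is how a few samples can be expanded into thousands, as in the quoted 5000-picture set.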
“…In recent years, with the improvement of computer hardware and the increasing demand for image processing in practical work, deep learning (DL) has made great progress in the field of security [8], handwritten digit recognition [9], human action recognition [10], financial trading [11], remote image processing [12][13][14][15][16][17], and others [18][19][20][21][22]. According to the study of Kussul et al [23] in processing land cover remote sensing images, deep learning algorithms are significantly better than machine learning algorithms such as the SVM.…”
Section: Introduction (mentioning, confidence: 99%)