2023
DOI: 10.3389/fnins.2023.1136934

Face-based age estimation using improved Swin Transformer with attention-based convolution

Abstract: Recently, Transformer models have become a new direction in the computer vision field, built on the multi-head self-attention mechanism. Compared with convolutional neural networks, the Transformer uses self-attention to capture global contextual information and extracts stronger features by learning the association relationships between different features, and it has achieved good results in many vision tasks. In face-based age estimation, some facial patches that contain rich age-specific informati…
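The abstract's description of self-attention (every patch token attending to every other token to capture global context) can be made concrete with a small sketch. The code below is an illustrative NumPy implementation of multi-head self-attention with randomly initialized projections; the shapes, window size, and function names are assumptions for the example, not the paper's improved Swin Transformer or its attention-based convolution.

```python
# Minimal sketch of multi-head self-attention over a sequence of patch
# embeddings (illustrative only; names and shapes are assumptions, not the
# paper's implementation).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """x: (seq_len, dim) patch embeddings; returns (seq_len, dim)."""
    seq_len, dim = x.shape
    head_dim = dim // num_heads
    # Random projection matrices stand in for learned parameters.
    w_q, w_k, w_v, w_o = (rng.standard_normal((dim, dim)) / np.sqrt(dim)
                          for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Split into heads: (num_heads, seq_len, head_dim)
    split = lambda t: t.reshape(seq_len, num_heads, head_dim).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention: every patch attends to every other patch,
    # which is how global contextual information is captured.
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(head_dim))
    out = attn @ v                                    # (heads, seq, head_dim)
    out = out.transpose(1, 0, 2).reshape(seq_len, dim)
    return out @ w_o

rng = np.random.default_rng(0)
patches = rng.standard_normal((49, 96))   # e.g. a 7x7 window of 96-dim tokens
print(multi_head_self_attention(patches, num_heads=3, rng=rng).shape)  # (49, 96)
```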

Cited by 7 publications (2 citation statements)
References 73 publications
“…While the use of transformers in rPPG prediction is hardly explored, there are some relevant works in this area. A temporal difference transformer was introduced [43,44] with the quasi-periodic rPPG features to represent the local spatiotemporal features and evaluated the results on multiple public datasets. Another notable contribution, Efficientphys [45], introduced an end-to-end neural architecture for device-based physiological sensing and conducted a comprehensive comparison with convolutional neural networks.…”
Section: Attention Mechanism and Transformers (mentioning)
confidence: 99%
“…This highlights the model's consistency. This study draws motivation and inspiration from the rPPGTr paper authored by [43,55].…”
Section: Ablation Study (mentioning)
confidence: 99%