2022
DOI: 10.1007/s10489-022-03645-7
MMRAN: A novel model for finger vein recognition based on a residual attention mechanism

Cited by 19 publications (12 citation statements) · References 39 publications
“…Adopting a similar approach, our method achieved a significant increase compared to Boucherit et al (2022), with an almost 15% increase in accuracy. When data from both sessions are mixed, as in the Das et al (2018), Liu et al (2022), and Ma, Wang & Hu (2023) configurations, our method achieves similar results.…”
Section: Methods
confidence: 73%
“…However, the proposed pipeline not only has a large number of parameters but also requires vigorous preprocessing of the input, making the work complex. In Liu et al (2022), the proposed fusion residual attention block includes a main path that extracts features at multiple scales and a guided attention path corresponding to the main-path feature map. Venous features extracted at different learning stages through these two pathways are integrated using a multistage residual attention scheme.…”
Section: Introduction
confidence: 99%
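The fusion described in that statement, where a guided attention path gates the main-path features and a residual (identity) term keeps them from being suppressed, can be sketched generically. This is a minimal NumPy illustration of residual-attention fusion under assumed tensor shapes, not the authors' MMRAN implementation; the function name, shapes, and the `(1 + attention) * features` form are assumptions based on common residual-attention designs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_fuse(main_feat, att_logits):
    """Fuse main-path features with a guided attention map.

    The attention map gates the features multiplicatively, while the
    identity term (the '1 +') preserves the original signal so that a
    near-zero attention map cannot erase useful vein features.
    """
    att = sigmoid(att_logits)        # attention map in (0, 1)
    return (1.0 + att) * main_feat   # residual attention fusion

# Toy demo: one hypothetical feature map and its attention logits,
# shaped (batch, channels, height, width).
rng = np.random.default_rng(0)
feat = rng.standard_normal((1, 8, 16, 16))
att_logits = rng.standard_normal((1, 8, 16, 16))
out = residual_attention_fuse(feat, att_logits)
print(out.shape)  # (1, 8, 16, 16)
```

In a multistage scheme of the kind the statement describes, a block like this would be applied at several depths of the network, with each stage's attention path computed from (and matched in shape to) that stage's main-path feature map.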
“…[31] proposed a local attention Transformer network based on the full view, and Liu et al. [32] designed a multiscale and multistage residual attention network. Both attention networks have been used for finger vein recognition with good results.…”
Section: Introduction
confidence: 99%
“…Therefore, Song et al [29] used the deep DenseNet model to improve recognition performance. Besides, embedding attention mechanisms in the network structure is also a way to improve network performance [30]; for example, Qin et al [31] proposed a local attention Transformer network based on the full view, and Liu et al. [32] designed a multiscale and multistage residual attention network. Both attention networks have been used for finger vein recognition with good results.…”
Section: Introduction
confidence: 99%