2022
DOI: 10.1007/s11042-022-12700-x
Comparative study of 1D-local descriptors for ear biometric system

Cited by 5 publications (3 citation statements)
References 40 publications
“…The results demonstrated that employing ear segmentation rather than original images improves the recognition performance. Regouid et al [18] explored the transformation of two-dimensional ear images into one-dimensional representations. Their investigation focused on the one-dimensional local binary patterns (1D-LBP) descriptor and its variations as an alternative solution for feature extraction.…”
Section: Handcrafted Methods
confidence: 99%
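To make the descriptor mentioned in this statement concrete, below is a minimal sketch of a basic 1D-LBP pipeline: flatten a 2D ear image into a 1D signal, then threshold each sample against its neighbours to form binary codes and a code histogram. The row-wise raster scan and the 8-neighbour window are assumptions made for illustration; they are not necessarily the 2D-to-1D conversion or the 1D-LBP variants actually studied by Regouid et al. [18].

```python
import numpy as np

def ear_image_to_signal(image: np.ndarray) -> np.ndarray:
    """Flatten a 2D grayscale ear image into a 1D signal.

    Assumption: a simple row-wise raster scan; the cited work may
    use a different 2D-to-1D conversion.
    """
    return image.astype(np.float64).ravel()

def lbp_1d(signal: np.ndarray, half_window: int = 4) -> np.ndarray:
    """Basic 1D-LBP: compare each sample with its neighbours.

    For a centre sample signal[i], the half_window samples on each
    side each contribute one bit (1 if neighbour >= centre), giving
    a code in [0, 2**(2 * half_window) - 1]. Returns the normalised
    histogram of codes as the feature vector.
    """
    n_bits = 2 * half_window
    weights = 1 << np.arange(n_bits, dtype=np.uint32)
    codes = []
    for i in range(half_window, len(signal) - half_window):
        centre = signal[i]
        neighbours = np.concatenate(
            (signal[i - half_window:i], signal[i + 1:i + half_window + 1])
        )
        bits = (neighbours >= centre).astype(np.uint32)
        codes.append(int(bits @ weights))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / max(len(codes), 1)

# Usage: descriptor = lbp_1d(ear_image_to_signal(gray_ear_image))
```

The resulting histogram can then be compared with any standard histogram distance (chi-square, for example) to match probe and gallery ears.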
“…The comparison reveals satisfactory Rank-1 recognition rates for most of the analyzed papers, such as [18, 20–28], which achieve rates surpassing 93% on the AMI dataset. In contrast, our approach exhibits the highest and most competitive performance, reaching a Rank-1 recognition rate of 100.00%.…”
Section: Comparison
confidence: 97%
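For reference, the Rank-1 recognition rate quoted above is commonly computed from a probe-gallery similarity matrix. The function below is a generic Rank-k sketch with illustrative array names; it is not code from any of the compared papers.

```python
import numpy as np

def rank_k_rate(similarity: np.ndarray,
                probe_labels: np.ndarray,
                gallery_labels: np.ndarray,
                k: int = 1) -> float:
    """Fraction of probes whose true identity appears among the
    k most similar gallery entries (Rank-k recognition rate)."""
    # Gallery indices sorted by decreasing similarity, top k kept.
    order = np.argsort(-similarity, axis=1)[:, :k]
    top_k_labels = gallery_labels[order]              # (n_probes, k)
    hits = (top_k_labels == probe_labels[:, None]).any(axis=1)
    return float(hits.mean())

# Rank-1: rank_k_rate(S, probe_y, gallery_y, k=1)
```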
“…
Method                     R1       R5       AUC
Raghavendra et al [47]     86.36%   -        -
Alshazly et al [48]        70.20%   -        -
Chowdhury et al [49]       67.26%   -        -
Hassaballah et al [50]     73.71%   -        -
Alshazly et al [42]        94.50%   99.40%   98.90%
Alshazly et al [43]        97.50%   99.64%   98.41%
Omara et al [51]           97.84%   -        -
Zhang et al [52]           93.96%   -        -
Omara et al [53]           96.82%   -        -
Khaldi et al [44]          96.00%   99.00%   94.47%
Hassaballah et al [24]     72.29%   -        -
Ahila et al [2]            96.99%   -        -
Khaldi et al [54]          98.33%   -        -
Alshazly et al [45]        99.64%   100%     98.99%
Aiadi et al [55]           97.67%   -        -
Sharkas [56]               99.45%   -        -
Ebanesar et al [57]        98.99%   -        -
Kohlakala et al [58]       99.20%   -        -
Our method (CFDCNet)       99.70%   100%     99.01%

Method                     R1       R5       AUC
[16]                       49.60%   -        -
Dodge et al [59]           56.35%   74.80%   -
Dodge et al [59]           68.50%   83.00%   -
Zhang et al [30]           50.00%   70.00%   -
Emersic et al [46]         62.00%   80.35%   95.51%
Khaldi et al [44]          50.53%   76.35%   80.97%
Hassaballah et al [24]     54.10%   -        -
Khaldi et al [60]          48.48%   -        -
Khaldi et al [54]          51.25%   -        -
Alshazly et al [45]        67.25%   84.00%   96.03%
Regouid et al [61]         43.00%   -        -
Kacar et al [62]           47.80%   72.10%   95.80%
Sajadi et al [25]          53.50%   -        -
Omara et al [63]           72…      -        -

…accurately extract the characteristics of ear images through a small number of ear samples and improve the accuracy of human ear recognition.…”
Section: R1 R5 AUC
confidence: 99%
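The R1/R5 columns above follow the Rank-k definition sketched earlier, while AUC is the area under the verification ROC curve. A minimal way to compute it from genuine/impostor match scores (the arrays here are hypothetical, not data from the table) is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical verification pairs: label 1 = genuine, 0 = impostor.
labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.92, 0.78, 0.35, 0.61, 0.88, 0.12])

auc = roc_auc_score(labels, scores)  # area under the ROC curve
print(f"AUC = {auc:.4f}")
```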