2021
DOI: 10.1002/int.22649

DADCNet: Dual attention densely connected network for more accurate real iris region segmentation

Abstract: Most existing performance evaluation standards for iris segmentation algorithms, such as the typical recall, precision, and F-measure (RPF-measure) protocol, are based on a pixel-to-pixel comparison between the mask image obtained after segmentation and the corresponding ground truth (GT) image. However, one of the most
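The pixel-to-pixel RPF protocol described in the abstract can be sketched as follows. This is an illustrative implementation of the standard recall/precision/F-measure computation between a predicted mask and a ground-truth mask; the function name and array conventions are mine, not the paper's:

```python
import numpy as np

def rpf_measure(pred: np.ndarray, gt: np.ndarray):
    """Pixel-to-pixel recall, precision, and F-measure between a
    predicted binary iris mask and its ground-truth (GT) mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # iris pixels correctly labelled
    fp = np.logical_and(pred, ~gt).sum()   # spurious iris pixels
    fn = np.logical_and(~pred, gt).sum()   # missed iris pixels
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return recall, precision, f
```

The abstract's point is that such a comparison treats the GT annotation as perfect, which motivates the paper's alternative evaluation perspective.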

Cited by 21 publications (15 citation statements)
References 40 publications
“…It can be seen from Table 8 that, for the MICHE-I dataset, our proposed method achieved the highest Nice1 value of 0.66, equal to that of IrisParseNet [15]. Our F1 value was 0.02% higher than that of the previous best method, DADCNet [34]. Both IrisParseNet [15] and DADCNet [34] use the same data augmentation strategy, which expands the training set by a factor of five, whereas our proposed method did not use data augmentation and still alleviated the overfitting problem, validating the superiority of our method.…”
Section: Methods (mentioning)
confidence: 58%
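The Nice1 measure referenced in this citation statement is, in the NICE.I evaluation protocol, based on the fraction of pixels on which the predicted mask and the ground truth disagree. A minimal sketch of that disagreement rate is below; the function name is illustrative, and note that cited works may rescale or invert this quantity when reporting it:

```python
import numpy as np

def nice1_error(pred, gt):
    """NICE.I-style segmentation error: the fraction of pixels on which
    the predicted mask and the ground-truth mask disagree (lower is
    better). Computed as the mean of the pixelwise XOR."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    return np.logical_xor(pred, gt).mean()
```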
“…To further verify the effectiveness of the proposed method, we compared it with a large number of state-of-the-art methods, divided into two main categories: non-deep-learning traditional methods [7, 11, 27, 32, 64, 65, 66, 67, 68] and CNN-based deep learning methods [15, 17, 19, 20, 34, 35, 69, 70, 71, 72]. It can be observed from Table 6, Table 7, Table 8, Table 9 and Table 10 that the deep learning methods outperformed the traditional methods, and our proposed method was the best among the deep learning methods.…”
Section: Methods (mentioning)
confidence: 99%
“…In the above formula, the pixels of the test image block and the answer-sheet image are superimposed and computed, and the result R is the correlation coefficient, whose value range is [0, 1]. The higher the value of R, the more similar the pixels of that part of the answer-sheet image are to the pixels of the test image block [9]. After all four parts are scanned, the maximum R value of each part is marked, and the coordinate positions of the four corners are thereby determined.…”
Section: R(x, y) = Σ T(x′, y′)·I(x + x′, y + y′) (mentioning)
confidence: 99%
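The template-scanning step described in this excerpt can be sketched as a normalized correlation search: slide the test block over the image, score each offset, and keep the location of the maximum R per region. For non-negative pixel values the score lies in [0, 1] by the Cauchy–Schwarz inequality, matching the range stated above. Function and variable names are illustrative, not from the cited work:

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image`, computing the normalized
    correlation R at every valid offset, and return the score map
    together with the location of the maximum R."""
    ih, iw = image.shape
    th, tw = template.shape
    t_energy = (template * template).sum()
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            denom = np.sqrt((patch * patch).sum() * t_energy)
            scores[y, x] = (patch * template).sum() / denom if denom else 0.0
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, best
```

Running this once per corner region and marking each region's maximum R mirrors the four-corner localization the excerpt describes.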
“…Lu et al. 34 proposed a contour-based model that combines coarse and fine localization results for off-angle and occluded iris segmentation and recognition. DADCNet 35 employed two attention modules and improved skip connections to segment the real iris region, in some cases more accurately than the corresponding ground-truth annotation. Subsequently, iris segmentation networks that adopted self-designed encoder-decoder architectures [14][15][16][17] have obtained remarkable segmentation accuracy on different iris databases.…”
Section: Iris Segmentation and Localization (mentioning)
confidence: 99%
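The "two attention modules" attributed to DADCNet can be illustrated with a minimal channel-then-spatial gating sketch. This is a generic illustration of dual attention over a feature map, assuming a (C, H, W) layout; it is not the paper's exact design, which uses learned convolutional modules:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    """Apply channel attention followed by spatial attention to a
    (C, H, W) feature map. Each stage computes a simple pooled
    statistic and uses it as a multiplicative sigmoid gate."""
    # Channel attention: gate each channel by its global average response.
    chan = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    feat = feat * chan[:, None, None]
    # Spatial attention: gate each location by its cross-channel mean.
    spat = sigmoid(feat.mean(axis=0))        # shape (H, W)
    return feat * spat[None, :, :]
```

The gating preserves the feature-map shape while re-weighting channels and locations, which is the general mechanism such attention modules use to emphasize the true iris region.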