2017
DOI: 10.1007/978-3-319-69923-3_46

Visible Spectral Iris Segmentation via Deep Convolutional Network


Cited by 7 publications (7 citation statements)
References 14 publications
“…The segmentation accuracy of the proposed method was 95.49%, which was higher than the accuracy of 47.84% achieved in the previous work, and the time cost of the proposed iris segmentation procedure was only approximately 0.06 s. The results on the challenging CASIA-Iris-Thousand database showed that the proposed method is a fast and accurate iris segmentation algorithm. The main advantage of the proposed algorithm over most state-of-the-art iris segmentation algorithms based on neural networks, such as IrisDenseNet [55] and the model proposed by He et al. [56], is its smaller model size, which makes it faster at segmenting iris images; this is crucial for a real-time iris recognition system or for deployment on a mobile device. For future work, we want to further improve the speed of the algorithm by creating heterogeneous models that combine the power of CNNs with the speed of traditional computer vision methods.…”
Section: Results
confidence: 99%
“…There are many iris segmentation methods based on deeply learned neural networks. In this section, we discuss the difference between two state-of-the-art methods, IrisDenseNet [55] and the model proposed by He et al. in [56].…”
Section: Difference Between the Proposed Methods and Other Published Methods
confidence: 99%
“…For these reasons, researchers should strive to improve the speed of the techniques by designing fused models that combine the strength of CNN techniques with the speed of traditional computer vision techniques. Another way is to utilize the semantic segmentation approach and combine it with the state-of-the-art iris segmentation techniques relying on neural networks, such as IrisDenseNet [40], the Faster R-CNN technique as proposed by Girshick [278], or the techniques suggested by He et al. [169] and Li et al. [177]. The semantic segmentation techniques have a high capability of predicting the reflection pixels in the iris region, which enhances the overall efficiency of the technique.…”
Section: Discussion
confidence: 99%
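
As a rough illustration of the fused-model idea raised in this statement, the sketch below pairs a cheap classical step (a Hough-transform circle search in OpenCV) with a crop that a separately trained CNN refiner would then segment. This is a minimal sketch with assumed parameter values; the function names, thresholds, and the CNN refinement stage itself are illustrative and are not taken from any of the cited papers.

```python
import cv2
import numpy as np

def coarse_iris_localization(gray):
    # Classical, fast step: find one candidate iris circle with a Hough transform.
    # Parameter values are illustrative guesses, not tuned for any dataset.
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=gray.shape[0] // 2,
        param1=100, param2=40, minRadius=20, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r

def crop_for_cnn(image, circle, margin=1.3):
    # Crop a square patch around the coarse circle so a (hypothetical, separately
    # trained) CNN only refines a small region instead of the full frame.
    x, y, r = circle
    half = int(r * margin)
    y0, y1 = max(0, y - half), min(image.shape[0], y + half)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half)
    return image[y0:y1, x0:x1], (x0, y0)

# Usage (assuming an eye image "eye.png" exists):
# gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
# circle = coarse_iris_localization(gray)
# if circle is not None:
#     patch, offset = crop_for_cnn(gray, circle)  # feed `patch` to the CNN refiner
```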
“…Despite that, there were limitations; for example, non-iris pixels similar to the pixels of the iris region can be incorrectly detected as iris pixels [204]. To address that, He et al. [169] introduced an iris segmentation method based on a deep CNN to extract the eye features and segment the iris, pupil, and sclera. The structure of the CNN is a modified version of the DeepLab model [205].…”
Section: Other CNN Architecture-Based Models
confidence: 99%
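
To make the DeepLab-style setup mentioned above concrete, here is a minimal inference sketch using the off-the-shelf DeepLabV3 model from torchvision as a stand-in. The four-class output (background, iris, pupil, sclera), the input resolution, and the untrained weights are assumptions for illustration; this does not reproduce the modified DeepLab architecture of He et al. [169].

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Assumed label set: 0 = background, 1 = iris, 2 = pupil, 3 = sclera.
NUM_CLASSES = 4

# Stock DeepLabV3 stands in for the modified DeepLab; no pretrained weights
# are loaded because the cited model's weights are not assumed to be available.
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((513, 513)),  # a common DeepLab input resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment_eye(pil_image):
    # Returns an (H, W) tensor of per-pixel class indices in 0..NUM_CLASSES-1.
    x = preprocess(pil_image).unsqueeze(0)   # (1, 3, 513, 513)
    with torch.no_grad():
        logits = model(x)["out"]             # (1, NUM_CLASSES, 513, 513)
    return logits.argmax(dim=1).squeeze(0)
```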