2020
DOI: 10.3390/sym12010078

PointNet++ and Three Layers of Features Fusion for Occlusion Three-Dimensional Ear Recognition Based on One Sample per Person

Abstract: The ear’s relatively stable structure makes it suitable for recognition. In common identification applications, only one sample per person (OSPP) is registered in a gallery; consequently, effectively training a deep-learning-based ear recognition approach is difficult. State-of-the-art (SOA) 3D ear recognition under the OSPP setting bottlenecks when large occluding objects are close to the ear. Hence, we propose a system that combines PointNet++ and three layers of features that are capable of extracting ri…

Cited by 11 publications (11 citation statements)
References 24 publications
“…Third, the ICP method [23] must search the entire surface of the gallery ear image for the point closest to the point from the probe sample. Furthermore, the feature fusion methods [31–33, 38], the surface variation feature extraction method [21], and the index method [40] must search the entire surface of the gallery ear image for the match of the query feature. Consequently, these methods require more time to search for the matching feature than the proposed system.…”
Section: Comparison With Existing Methods
confidence: 99%
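The exhaustive nearest-point search that the excerpt attributes to ICP-style matching can be sketched as follows. This is a minimal illustration on synthetic points, not the paper's code; the function name, array shapes, and the brute-force pairwise-distance approach are assumptions chosen to show why the cost scales with the full gallery surface:

```python
import numpy as np

def nearest_point_distances(gallery, probe):
    """For each probe point, scan the ENTIRE gallery surface for the
    closest point -- the per-query cost the excerpt points out.
    gallery: (N, 3) points of one registered ear surface
    probe:   (M, 3) points of the query sample
    returns: (M,) distance from each probe point to its nearest gallery point
    """
    # Pairwise squared distances via broadcasting, shape (M, N)
    d2 = ((probe[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=-1)
    return np.sqrt(d2.min(axis=1))

rng = np.random.default_rng(0)
gallery = rng.random((1000, 3))  # synthetic stand-in for a gallery ear surface
probe = rng.random((50, 3))      # synthetic probe sample
dists = nearest_point_distances(gallery, probe)
print(dists.shape)  # (50,)
```

In practice such searches are usually accelerated with a spatial index (e.g. a k-d tree), but even then each identification must be repeated against every gallery subject, which is the scaling issue the proposed system avoids.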
“…However, when these indicators are used to evaluate the LC_KSVD method [24], which is difficult to run on an OSPP gallery, its running time on a gallery composed of multiple samples per subject cannot represent its running time on an OSPP gallery. Moreover, for the methods of [7, 21, 23, 31–33, 38, 40], the time required for one identification operation is easily affected by the scale of the OSPP gallery; that is, as the OSPP gallery expands, the time required for one identification operation tends to increase. To solve these problems, we propose an indicator: the time of one identification operation shared by each gallery subject (TOIOS‐EGS).…”
Section: Methods
confidence: 99%
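The TOIOS-EGS indicator described in the excerpt normalizes identification time by gallery size. A minimal sketch of that computation, with a hypothetical helper name and example numbers (the 0.5 s timing and 100-subject gallery are illustrative, not from the paper):

```python
def toios_egs(identification_time_s, num_gallery_subjects):
    """Time of One Identification Operation Shared by Each Gallery Subject:
    the total time of one identification divided by the number of gallery
    subjects, so methods can be compared fairly as the OSPP gallery grows."""
    return identification_time_s / num_gallery_subjects

# e.g. an identification taking 0.5 s against a 100-subject OSPP gallery
print(toios_egs(0.5, 100))  # 0.005
```

A method whose per-identification time grows linearly with the gallery would keep a roughly constant TOIOS-EGS, while a sublinear method's TOIOS-EGS shrinks as the gallery expands.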
“…Similarly, SECOND [28] applies sparse convolution to extract efficient features from voxels. F-PointNet [29] first uses a 2D object detector on the image to generate 2D object proposals and then extracts the corresponding frustum point cloud as the basis for regression and prediction; however, the accuracy of F-PointNet relies heavily on the 2D detector.…”
Section: Related Work
confidence: 99%
“…(27) else (28) add new voxel in hash table (29) end if (30) end if (31) end for (Algorithm 1: Voxelization algorithm). …form a square receptive field; the introduction of deformable convolution can better adapt to the changes of vehicles in different directions.…”
Section: Network Architecture, Point Cloud Feature Selection
confidence: 99%
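The excerpt's voxelization fragment amounts to: compute each point's voxel index, and add a new hash-table entry when that index is unseen, otherwise append the point to the existing voxel. A minimal sketch in that spirit — the function name, the per-voxel point cap, and the dict-as-hash-table choice are illustrative assumptions, not the cited algorithm verbatim:

```python
import numpy as np

def voxelize(points, voxel_size, max_points_per_voxel=32):
    """Hash-table voxelization: the integer voxel index of each point is
    the hash key; a new voxel entry is created for an unseen key,
    otherwise the point joins the existing voxel (up to a cap)."""
    table = {}
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key in table:
            if len(table[key]) < max_points_per_voxel:
                table[key].append(p)  # existing voxel: append point
        else:
            table[key] = [p]  # add new voxel in hash table
    return table

pts = np.array([[0.10, 0.20, 0.00],
                [0.15, 0.25, 0.05],   # same voxel as the first point
                [1.20, 0.20, 0.00]])  # a different voxel
vox = voxelize(pts, voxel_size=0.5)
print(len(vox))  # 2 occupied voxels
```

Keying on the voxel index keeps lookup O(1) per point regardless of scene extent, which is why hash-table voxelization is common in sparse 3D detectors such as the SECOND-style pipelines mentioned above.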