2018 IEEE International Conference on Real-Time Computing and Robotics (RCAR)
DOI: 10.1109/rcar.2018.8621672

A Robust Feature Detection Method for an Infrared Single-Shot Structured Light System

Cited by 2 publications (1 citation statement)
References 10 publications
“…First, the 3D coordinates of matched facial feature points across multiple views are obtained from the facial feature point detection results; initial registration parameters for the multiple groups of 3D data are then computed, and ICP is introduced for locally optimal registration. Normals are computed for the registered 3D point cloud data, and the overlapping point clouds are optimized and fused using the alignment between each point's normal and the camera optical axis as a weight, so that high-quality point cloud data are retained [26]. Texture data captured from multiple angles shows significant brightness and color differences caused by the shooting angle and ambient light, so color equalization in the stitching region, together with the normal weights, is used for global texture correction.…”
Section: 3D Face Reconstruction
Mentioning confidence: 99%
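
To make the fusion step in the quoted statement concrete, below is a minimal sketch of normal-weighted fusion of two already-registered point clouds. It is not the cited authors' implementation; the function name fuse_by_normal_weight, the voxel-based overlap handling, and the specific weight (the clipped dot product between each point's normal and the negated camera optical axis) are assumptions made purely for illustration.

import numpy as np

def fuse_by_normal_weight(points_a, normals_a, axis_a,
                          points_b, normals_b, axis_b,
                          voxel=2.0):
    """Merge two registered clouds, keeping the better-observed point per voxel.

    points_*  : (N, 3) arrays in a common frame (already ICP-registered).
    normals_* : (N, 3) unit normals, one per point.
    axis_*    : (3,) unit optical-axis direction of the camera for that view,
                pointing from the camera toward the scene.
    voxel     : voxel edge length (same units as the points) used to detect overlap.
    """
    pts = np.vstack([points_a, points_b])
    nrm = np.vstack([normals_a, normals_b])
    axes = np.vstack([np.tile(axis_a, (len(points_a), 1)),
                      np.tile(axis_b, (len(points_b), 1))])

    # Weight: how directly the surface faced its camera
    # (normal against the negated optical axis); grazing views get low weight.
    w = np.clip(-np.sum(nrm * axes, axis=1), 0.0, 1.0)

    # Bucket points into voxels; in each voxel keep only the highest-weight point,
    # which thins overlapping regions down to the best-observed samples.
    keys = np.floor(pts / voxel).astype(np.int64)
    best = {}
    for i, key in enumerate(map(tuple, keys)):
        if key not in best or w[i] > w[best[key]]:
            best[key] = i
    keep = np.fromiter(best.values(), dtype=np.int64)
    return pts[keep], nrm[keep]

The hard per-voxel selection is only one way to resolve overlap; the quoted description would equally admit a weighted average of coincident points using the same normal-based weights.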