2020
DOI: 10.3390/rs12071142

Automatic 3D Landmark Extraction System Based on an Encoder–Decoder Using Fusion of Vision and LiDAR

Abstract: To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in the 3D digital world. To recognize a user’s motions, 3D landmarks are provided by analyzing a 3D point cloud collected through a light detection and ranging (LiDAR) system or a red green blue (RGB) image collected visually. However, manual supervision is required to e…
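The abstract and title describe fusing vision (an RGB segmentation) with a LiDAR point cloud to locate human objects without manual supervision. As a rough illustration of that fusion idea only, the sketch below projects LiDAR points into the camera image and keeps the points that land on "person" pixels of a segmentation mask; the camera parameters, mask, and function names are placeholders, not the paper's actual pipeline.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): fuse an RGB segmentation
# mask with a LiDAR point cloud by projecting each 3D point into the image
# and keeping the points that fall on "person" pixels. K, R, t, and the mask
# are illustrative placeholders.

def select_person_points(points_lidar, mask, K, R, t):
    """points_lidar: (N, 3) LiDAR points; mask: (H, W) boolean person mask."""
    # Transform LiDAR coordinates into the camera frame: X_cam = R @ X + t.
    pts_cam = points_lidar @ R.T + t
    in_front = pts_cam[:, 2] > 0                       # keep points ahead of the camera
    pts_cam = pts_cam[in_front]

    # Pinhole projection with intrinsic matrix K.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # discard points outside the image
    keep = np.zeros(len(u), dtype=bool)
    keep[inside] = mask[v[inside], u[inside]]          # person pixels only
    return points_lidar[in_front][keep]

# Toy usage with synthetic data.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
mask = np.zeros((480, 640), dtype=bool)
mask[100:400, 250:390] = True                          # pretend this region is a person
cloud = np.random.uniform([-2, -2, 1], [2, 2, 6], size=(5000, 3))
person_points = select_person_points(cloud, mask, K, R, t)
print(person_points.shape)
```

The surviving points could then be passed to a landmark extractor (the paper's title suggests an encoder–decoder network for that stage); the projection step shown here is just the standard pinhole model.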

Cited by 3 publications (6 citation statements). References 23 publications.
“…It is predicted that a dense segmented image will be set as features of the surrounding 3D point cloud. When the density was corrected with the difference image-based density correction [20], it was confirmed that the human object’s shadow, clothes, and the background were similar, and noise was generated by the shaking of the RGB camera.…”
Section: Execution Results of Generation for the Segmentation Image
confidence: 91%
“…It was confirmed that as the density increased, the number of incorrect 3D point clouds also increased. In the case of the difference image-based density correction [20], it was confirmed that the 3D point cloud did not fit the human object as well as that generated by the proposed method, and the noise in the 3D point cloud increased by a factor of 3.5.…”
Section: Execution Results of Generation for the Segmentation Image
confidence: 95%
“…Although the algorithms in the field of image processing can be used for reference or directly, the loss of spatial information will inevitably occur in this process. In addition, the transformation matrix is required during coordinate transformation, which is relatively cumbersome [22, 23].…”
Section: Introduction
confidence: 99%
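The quoted passage notes that applying image-domain algorithms to LiDAR data requires a transformation matrix between the sensor coordinate frames. A minimal sketch of that bookkeeping, assuming a standard homogeneous (4x4) extrinsic transform with placeholder rotation and translation values rather than any calibration from the cited papers:

```python
import numpy as np

# Sketch of the coordinate transformation referred to above: a 4x4 homogeneous
# (extrinsic) matrix maps LiDAR coordinates into the camera frame before any
# image-domain processing. The R and t values below are illustrative only.

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, points):
    """Apply T to (N, 3) points using homogeneous coordinates."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# Example: LiDAR x-forward/z-up axes rotated into camera z-forward/y-down axes,
# with a small translation offset between the two sensors (made-up values).
R = np.array([[0, -1,  0],
              [0,  0, -1],
              [1,  0,  0]], dtype=float)
t = np.array([0.0, 0.1, 0.2])
T_cam_from_lidar = make_transform(R, t)
T_lidar_from_cam = np.linalg.inv(T_cam_from_lidar)    # reverse mapping

pts_lidar = np.array([[5.0, 0.0, 1.5]])               # one point 5 m ahead of the sensor
pts_cam = apply_transform(T_cam_from_lidar, pts_lidar)
back = apply_transform(T_lidar_from_cam, pts_cam)     # round-trip check
print(pts_cam, np.allclose(back, pts_lidar))
```

Keeping the extrinsics in homogeneous form makes the reverse (camera-to-LiDAR) mapping a single matrix inversion, and the round-trip check verifies the two transforms are consistent.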