2018
DOI: 10.3390/s18082571

Towards a Meaningful 3D Map Using a 3D Lidar and a Camera

Abstract: Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping are camera-based approaches that cannot operate in large-scale environments owing to their computational burden. Recently, combining a 3D Lidar with a camera was introduced to address this problem, and a 3D Lidar and a camera were also utilized for semantic 3D mapping. In this study, our algorithm c…
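The combination described in the abstract is typically realized by calibrating the camera against the Lidar and projecting the 3D points into a semantically segmented image, so that each point inherits the class label of the pixel it lands on. The sketch below illustrates only that projection step, not the paper's actual pipeline; the intrinsic matrix K, the extrinsic transform T_cam_lidar, and the label_image class map are assumed, illustrative inputs.

```python
import numpy as np

def label_lidar_points(points_xyz, label_image, K, T_cam_lidar):
    """Attach per-pixel semantic labels to 3D LiDAR points (illustrative sketch).

    points_xyz   : (N, 3) LiDAR points in the LiDAR frame
    label_image  : (H, W) integer class map from an image segmentation network
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) extrinsic transform from the LiDAR to the camera frame
    Returns an (N,) label array; -1 marks points behind the camera or outside the image.
    """
    n = points_xyz.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    labels = np.full(n, -1, dtype=np.int32)
    in_front = pts_cam[:, 2] > 0.1          # keep only points in front of the camera
    # Pinhole projection into pixel coordinates.
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = label_image.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_image[v[valid], u[valid]]
    return labels
```

The labeled points can then be accumulated into a voxel grid or octree to form the semantic 3D map; how labels from multiple views are fused is a design choice of each system and is not shown here.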

Cited by 27 publications (14 citation statements) · References 32 publications

Citation statements (ordered by relevance):
“…Wang and Kim [34] use images and 3D point clouds from the KITTI dataset [10] to jointly estimate road layout and segment urban scenes semantically by applying a relative location prior. Jeong et al. [11], [12] also propose a multi-modal sensor-based semantic 3D mapping system to improve the segmentation results in terms of the intersection-over-union (IoU) metric, in large-scale environments as well as in environments with few features. Liang et al. [16] propose a novel 3D object detector that can exploit both LiDAR and camera data to perform accurate object localization.…”
Section: Introduction (mentioning, confidence: 99%)
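For reference, the intersection-over-union (IoU) metric mentioned in the statement above scores a segmentation per class as the overlap between predicted and ground-truth label masks divided by their union. A minimal sketch, assuming integer label maps and an ignore label of 255 (a common convention, not taken from the cited papers):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore_label=255):
    """Per-class IoU for a pair of integer label maps of the same shape."""
    mask = gt != ignore_label          # drop pixels without ground truth
    pred, gt = pred[mask], gt[mask]
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union    # classes absent from both maps stay NaN
    return ious

# Mean IoU over the classes that actually occur:
# miou = np.nanmean(per_class_iou(pred, gt, num_classes=19))
```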
“…In this scenario, a fleet of vehicles could all contribute to the same exploration, requiring collaborative behaviors for coordination and information sharing between them [39], or even offloading the most computationally intensive tasks to an external infrastructure (cloud robotics) [40]. Lastly, by combining the RGB data, the robot could detect a new object placed in front of it and then classify it [41]. This classification would be useful for different tasks, for example choosing the proper gripper configuration to grasp the object, or placing it in the desired container in a task where the robot's goal is to sort objects by category.…”
Section: Results (mentioning, confidence: 99%)
“…With the recent developments in LiDAR technology, numerous mapping systems have been proposed in the SLAM community [38,39,40,41,42,43,44]. Although LiDARs are precise and less noisy than RGB-D sensors, it remains challenging to build efficient 3D maps with such sensors.…”
Section: Related Work (mentioning, confidence: 99%)