Fusion of 3D-LIDAR and camera data for scene parsing (2014)
DOI: 10.1016/j.jvcir.2013.06.008

Cited by 61 publications (45 citation statements)
References 34 publications

“…This approach provides a finer representation of the environment, but at the expense of increased processing time and reduced memory efficiency. To mitigate this, usually a voxel-based filtering mechanism is applied to the raw point cloud to reduce the number of points, e.g., [24,25].…”
Section: Representation (mentioning citation; confidence: 99%)
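The voxel-based filtering these citing works refer to (e.g., [24,25]) can be illustrated with a library-free sketch: the function below is a minimal NumPy voxel-grid downsampler that keeps one centroid per occupied voxel. The function name, the 0.2 m voxel size, and the synthetic cloud are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one centroid per occupied voxel."""
    # Quantize each point to an integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel; `inverse` maps each point to its voxel group.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    # Sum the points of each voxel, then divide by the per-voxel counts.
    centroids = np.zeros((counts.size, 3), dtype=points.dtype)
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# Illustrative use: a synthetic 100k-point scan reduced with a 0.2 m grid.
cloud = np.random.uniform(-20.0, 20.0, size=(100_000, 3))
reduced = voxel_downsample(cloud, voxel_size=0.2)
print(cloud.shape, "->", reduced.shape)
```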
“…In [24,27,32,49,50], the authors implemented the RANSAC algorithm to segment the ground plane in the point cloud with the assumption of flat surface. However, as mentioned in [23,51], for non-planar surfaces, such as undulated roads, uphill, downhill, and humps, this model fitting method is not adequate.…”
Section: Segmentation Algorithms (mentioning citation; confidence: 99%)
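The RANSAC ground-plane fit that [24,27,32,49,50] rely on can be sketched in a few lines of NumPy; note that it encodes exactly the flat-surface assumption that [23,51] criticise. The iteration count, distance threshold, and function name are illustrative choices, not parameters from the cited works.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.2, seed=None):
    """Fit one plane n.p + d = 0 by RANSAC; returns (plane coeffs, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Hypothesise a plane from three random points.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:
            continue  # nearly collinear sample, no unique plane
        normal /= norm
        d = -normal @ sample[0]
        # Score the hypothesis by how many points lie within the threshold.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = np.append(normal, d), inliers
    return best_plane, best_inliers
```

On undulating or sloped roads, much of the ground falls outside the single inlier band, which is the failure mode the excerpt points to.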
“…Besides the study of segmenting 3D point clouds [22], researchers have also started working on fusion based scene parsing. They often conduct segmentation on an image and a 3D point cloud individually, while paying attention to integrating two segmentation results via fusion techniques such as the fuzzy logic inference framework [23].…”
Section: Related Work (mentioning citation; confidence: 99%)
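The excerpt mentions fusing separate image and point-cloud segmentations through a fuzzy logic inference framework [23]. The sketch below is not that framework; it is only an assumed, minimal illustration of fuzzy-style combination, merging per-class membership grades from the two modalities with min (conjunction) and max (disjunction) operators and arbitrary rule weights.

```python
import numpy as np

def fuzzy_fuse(p_image: np.ndarray, p_lidar: np.ndarray) -> np.ndarray:
    """Combine (N, C) per-class membership grades from two modalities.

    Rule 1 (AND, t-norm = min): both modalities support the class.
    Rule 2 (OR,  s-norm = max): at least one modality supports the class.
    The 0.7 / 0.3 rule weights are arbitrary illustrative values.
    """
    agree = np.minimum(p_image, p_lidar)
    either = np.maximum(p_image, p_lidar)
    fused = 0.7 * agree + 0.3 * either
    # Normalise per element so the fused grades behave like class scores.
    return fused / fused.sum(axis=1, keepdims=True)

# Illustrative use: one element, two classes; defuzzify by taking the argmax.
scores = fuzzy_fuse(np.array([[0.8, 0.2]]), np.array([[0.6, 0.4]]))
label = int(np.argmax(scores, axis=1)[0])
```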
“…The above-mentioned appearance and geometric features are extracted to classify each grid into traversable or non-traversable categories. The other method fuses multi-modalities at decision level [12]. Specifically, it classifies each sensor's data individually and then combines the classification results by a fusion scheme.…”
Section: Introduction (mentioning citation; confidence: 99%)
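Decision-level fusion as described here (classify each sensor's data individually, then combine the results [12]) can be sketched, under assumptions, as a weighted log-odds combination of per-grid-cell traversability probabilities. The logistic form, the sensor weights, and the 0.5 decision threshold are generic choices for illustration, not the fusion scheme of the cited work.

```python
import numpy as np

def _logit(p, eps=1e-6):
    """Map probabilities to log-odds, clipping away exact 0 and 1."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def fuse_decisions(p_camera, p_lidar, w_camera=1.0, w_lidar=1.0):
    """Fuse per-cell traversability probabilities from two independent classifiers."""
    fused_logit = w_camera * _logit(p_camera) + w_lidar * _logit(p_lidar)
    return 1.0 / (1.0 + np.exp(-fused_logit))

# Illustrative 2 x 2 grid: the camera and LIDAR classifiers partially disagree.
p_cam = np.array([[0.9, 0.2], [0.6, 0.7]])
p_lid = np.array([[0.8, 0.4], [0.3, 0.9]])
traversable = fuse_decisions(p_cam, p_lid) > 0.5
```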