2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019
DOI: 10.1109/iros40897.2019.8968535
Deep Generative Modeling of LiDAR Data

Abstract: Building models capable of generating structured output is a key challenge for AI and robotics. While generative models have been explored on many types of data, little work has been done on synthesizing lidar scans, which play a key role in robot mapping and localization. In this work, we show that one can adapt deep generative models for this task by unravelling lidar scans into a 2D point map. Our approach can generate high quality samples, while simultaneously learning a meaningful latent representation of…
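The abstract's central preprocessing idea, unravelling a lidar scan into a 2D point map, is commonly realized as a sensor-view (spherical) projection. A minimal sketch, assuming illustrative grid dimensions and a roughly HDL-64-like vertical field of view rather than the paper's exact configuration:

```python
import numpy as np

def lidar_to_2d_grid(points, h=64, w=512, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) Cartesian point cloud onto an (h, w) range image.

    Grid size and vertical field of view (degrees) are illustrative
    defaults, not the paper's actual sensor configuration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range per point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    # Map azimuth to columns and elevation to rows of the 2D grid.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down_r) / fov) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    grid = np.zeros((h, w), dtype=np.float32)       # 0 marks empty cells
    grid[v, u] = r                                  # one range value per occupied cell
    return grid
```

A generative model can then treat the grid like an image; cells left at zero correspond to the missing returns (point drops) that citing work below notes must be handled.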

Cited by 39 publications (29 citation statements)
References 37 publications
“…These deep networks have shown promising performances on object classification and scene understanding, but none have been applied to radar signal processing. Although there are studies on extracting points from RGB-D [41] or LiDAR scanners [42], constructing and analyzing points from a range-Doppler radar system are different concepts. Instead of directly processing the input points, we add an intensity-based reconstruction module before the hierarchical PointNet model, which incorporates the radar domain knowledge into the 3D network.…”
Section: Related Work
Confidence: 99%
“…In semantic point segmentation [1,2] and object detection tasks [3], these approaches have gained performance in practice. As for GANs with LiDAR data, Caccia et al [10] trained a GAN generator with Cartesian points projected onto sensor-view 2D grids.…”
Section: Introduction
Confidence: 99%
“…The goal of this study is to build a generative model of LiDAR data based on the 2D representation [1]-[3], [10], [11]. The existing work on GANs [10] relied on preprocessing LiDAR data, such as using heuristics to fill in the point drops. Moreover, the quantitative performance has not yet been evaluated in terms of quality and diversity.…”
Section: Introduction
Confidence: 99%
“…In order to leverage mature 2D detection models to extract visual cues from multiple signals, we first convert ambient and reflectance signals into a 2D image representation called "LiDAR image". Unlike previous methods where the point cloud is projected into image space with the calibration matrix, we re-organize the LiDAR multi-modal measurement into a 2D image according to the alignment of the LiDAR detector array (i.e., range view [1]), as shown in Fig. 1. Specifically, each column of pixels is the set of signals captured by the vertically aligned LiDAR detectors, and each row of pixels is the set of signals captured by the same detector in different horizontal directions.…”
Confidence: 99%
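The detector-array re-organization described in this statement amounts to scattering per-detector measurements into a dense grid indexed by laser ring and horizontal firing step, with no calibration matrix involved. A minimal sketch; the function name, argument layout, and grid size are illustrative assumptions, not the cited paper's actual interface:

```python
import numpy as np

def measurements_to_lidar_image(ring_ids, azimuth_steps, values,
                                n_rings=64, n_steps=1024):
    """Scatter raw per-detector measurements (e.g. range, ambient, or
    reflectance) into a dense 2D "LiDAR image": row = laser ring
    (detector), column = horizontal firing step.
    """
    img = np.zeros((n_rings, n_steps), dtype=np.float32)
    img[ring_ids, azimuth_steps] = values   # cells with no return stay 0
    return img
```

One such image per signal type (range, ambient, reflectance) yields a multi-channel input that standard 2D detection backbones can consume directly.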