2020
DOI: 10.1007/s41064-020-00114-z

Conditional Adversarial Networks for Multimodal Photo-Realistic Point Cloud Rendering

Abstract: We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that we can use this approach to colourize point clouds without the use of any camera…
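The abstract describes a conditional GAN trained on point cloud–image pairs, in the spirit of image-to-image translation (pix2pix): the generator is conditioned on a point cloud rendering, and the discriminator judges (condition, image) pairs. Below is a minimal toy sketch of that objective using NumPy. The stand-in `generator` and `discriminator` functions, the shapes, and the data are illustrative assumptions, not the paper's actual architecture; only the loss structure (adversarial term plus an L1 reconstruction term, as in pix2pix) follows the standard formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8
point_cloud_proj = rng.random((H, W))   # conditioning input x (toy point cloud rendering)
real_photo = rng.random((H, W))         # target image y (toy camera photo)

def generator(x):
    """Stand-in generator: a fixed transform of the condition (illustrative only)."""
    return 0.5 * x + 0.25

def discriminator(x, y):
    """Stand-in discriminator on (condition, image) pairs: probability in (0, 1)."""
    score = np.mean(x * y)
    return 1.0 / (1.0 + np.exp(-score))

fake_photo = generator(point_cloud_proj)

# Conditional-GAN losses (non-saturating form), plus the L1 term that
# pix2pix adds to keep the output close to the ground-truth photo.
d_real = discriminator(point_cloud_proj, real_photo)
d_fake = discriminator(point_cloud_proj, fake_photo)

d_loss = -np.log(d_real) - np.log(1.0 - d_fake)   # discriminator objective
g_adv = -np.log(d_fake)                           # generator adversarial term
g_l1 = np.mean(np.abs(real_photo - fake_photo))   # reconstruction term
lam = 100.0                                       # pix2pix weights L1 with lambda = 100
g_loss = g_adv + lam * g_l1

print(fake_photo.shape, round(float(d_loss), 3), round(float(g_loss), 3))
```

In a real training loop, `d_loss` and `g_loss` would be minimized alternately by gradient descent on the discriminator's and generator's parameters respectively; here the functions are fixed so only the loss computation is shown.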

Cited by 8 publications (5 citation statements)
References 24 publications
“…The discriminator provides an adversarial loss, which could force the generated images towards the target manifold [38]. Such kinds of network architectures are currently often used for style transfer [38][39][40], image super-resolution [41], image inpainting, etc. The aim is often to generate photos as realistic as possible.…”
Section: Generative Adversarial Network (GAN)
Confidence: 99%
“…Another potential approach is to use GANs (Generalized Adversarial Networks) in order to synthesize a new representation based on a given input (see e.g. Isola et al (2017); Peters and Brenner (2018)).…”
Section: Discussion and Outlook on Future Work
Confidence: 99%
“…This information has been exploited in another research work to synthesize realistic visualizations from Lidar data projections. In order to do so, a conditional GAN was trained with pairs of Lidar and camera-data, including a time-stamp (Peters and Brenner 2018). In this way it is possible to not only generate realistic views (Figure 4), but also realistic views at different times of the year, e.g.…”
Section: Acquisition of Environmental Information
Confidence: 99%