2021
DOI: 10.1109/lra.2021.3105721

ChangeGAN: A Deep Network for Change Detection in Coarsely Registered Point Clouds

Abstract: In this paper we introduce a novel change detection approach called ChangeGAN for coarsely registered point clouds in complex street-level urban environments. Our generative adversarial network (GAN)-like architecture compounds Siamese-style feature extraction, U-Net-like use of multiscale features, and Spatial Transformation Network (STN) blocks for optimal transformation estimation. The input point clouds are represented as range images, which enables the use of 2D convolutional neural networks. The result is…
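The abstract sketches the overall design at a high level: a shared (Siamese) encoder over range images, STN blocks that estimate an aligning transformation, and U-Net-style fusion of multiscale features. The snippet below is only a minimal illustration of that general pattern, not the published ChangeGAN network; the layer sizes, the single 2D affine STN, and the names ChangeGANLikeGenerator and conv_block are all assumptions made for the sketch.

```python
# Minimal illustration of a Siamese range-image encoder with an STN-based
# alignment step and U-Net-style feature fusion. All layer sizes and the
# affine transform parameterization are assumptions, not the authors'
# published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class ChangeGANLikeGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared (Siamese) encoder applied to both epochs' range images.
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        # STN head: regresses a 2D affine transform that refines the
        # coarse registration of the second epoch's feature maps.
        self.loc = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                 nn.Linear(64 * 8 * 8, 6))
        nn.init.zeros_(self.loc[-1].weight)            # start from the
        with torch.no_grad():                          # identity transform
            self.loc[-1].bias.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))
        # Decoder fusing multiscale features from both epochs (U-Net style).
        self.dec = conv_block(64 * 2 + 32 * 2, 32)
        self.out = nn.Conv2d(32, 1, 1)                 # per-pixel change mask

    def encode(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(F.max_pool2d(f1, 2))
        return f1, f2

    def forward(self, range_a, range_b):
        a1, a2 = self.encode(range_a)
        b1, b2 = self.encode(range_b)
        # Estimate one affine alignment and warp both feature scales of epoch B.
        theta = self.loc(b2).view(-1, 2, 3)
        b2 = F.grid_sample(b2, F.affine_grid(theta, b2.size(), align_corners=False),
                           align_corners=False)
        b1 = F.grid_sample(b1, F.affine_grid(theta, b1.size(), align_corners=False),
                           align_corners=False)
        # Upsample the coarse features and fuse with fine features (skip connections).
        up = F.interpolate(torch.cat([a2, b2], 1), scale_factor=2.0)
        return torch.sigmoid(self.out(self.dec(torch.cat([up, a1, b1], 1))))
```

In a full GAN-style setup a discriminator would additionally judge the predicted change maps; that part is omitted from this sketch.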

Cited by 14 publications (6 citation statements) · References 21 publications
“…Creating datasets for CD is more complicated than other tasks; it requires traversing point clouds not just at one epoch, but at two or more epochs. Given the lack of annotated data, to properly train the models, researchers use a variety of strategies, including transfer learning [156], data augmentation [157,158], and Point Clouds Generative Adversarial Networks (PC-GAN) [159,160]. Although these techniques alleviate some of the problems, coupled with a lack of samples, further improvements are still needed.…”
Section: Discussion and Perspectives
confidence: 99%
“…The human pose estimation task can be applied in surveillance applications, which demand real-time solutions. To address this need, our approach involves transforming the representation of the NRCS lidar point cloud from 3D Cartesian coordinates to a spherical polar coordinate system, similar to our previous works in [39, 40]. We generate a 2D pixel grid by discretizing the horizontal and vertical FoVs, where each 3D point’s distance from the sensor is mapped to a pixel determined by the corresponding azimuth and elevation values.…”
Section: Proposed Methods
confidence: 99%
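For concreteness, below is a minimal NumPy sketch of the kind of spherical projection the excerpt above describes: each 3D point is binned into a 2D grid by its azimuth and elevation, and its range is stored as the pixel value. The 64×1024 grid, the field-of-view limits, and the function name are illustrative assumptions, not the settings used in the cited works.

```python
# Convert a lidar point cloud to a range image via spherical projection.
# FoV limits and grid size are example values only.
import numpy as np

def point_cloud_to_range_image(points, h=64, w=1024,
                               v_fov=(-22.5, 22.5), h_fov=(-180.0, 180.0)):
    """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)            # distance from the sensor
    azimuth = np.degrees(np.arctan2(y, x))     # horizontal angle
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))  # vertical angle

    # Discretize the horizontal and vertical fields of view into a pixel grid.
    col = ((azimuth - h_fov[0]) / (h_fov[1] - h_fov[0]) * (w - 1)).astype(int)
    row = ((v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * (h - 1)).astype(int)

    image = np.zeros((h, w), dtype=np.float32)
    valid = (row >= 0) & (row < h) & (col >= 0) & (col < w)
    image[row[valid], col[valid]] = r[valid]   # keeps the last point per pixel
    return image
```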
“…Range images are widely used, compact representations of Lidar-based depth measurements [1], [3], [16], which make it possible to adopt 2D convolution operations and effective image-based neural network architectures [4], [6] during processing.…”
Section: Range Image Generation
confidence: 99%
“…For dynamic environment perception and recognition tasks such as advanced scene analysis and understanding, repetitive, typically rotating multi-beam (RMB) Lidar sensors (e.g., Ouster OS1 or Velodyne Puck models) [1] are commonly utilized devices. RMB Lidars can produce real-time point cloud streams (300k-2M points/s); however, their measurements have low spatial density, and their field of view (FoV) coverage is constant through the whole scanning process: their vertical resolution is fixed by the number of laser beams, while their horizontal resolution depends on the sensor's rotation frequency (5-20 Hz).…”
Section: Introduction
confidence: 99%
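As a rough illustration of the trade-off mentioned in the excerpt above, the horizontal angular resolution follows directly from the per-beam firing rate and the spin rate. The firing rate used below is an example value, not the specification of any particular sensor.

```python
# Back-of-the-envelope illustration: for a fixed firing rate, spinning
# faster leaves fewer horizontal samples per revolution.
firings_per_second = 20_000          # horizontal firings per beam (example value)
for rotation_hz in (5, 10, 20):      # typical RMB Lidar spin rates
    columns = firings_per_second // rotation_hz
    resolution_deg = 360.0 / columns
    print(f"{rotation_hz:>2} Hz -> {columns} columns/rev, "
          f"{resolution_deg:.2f} deg horizontal resolution")
```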