2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8814047

Cross-Sensor Deep Domain Adaptation for LiDAR Detection and Segmentation

Abstract: A considerable amount of annotated training data is necessary to achieve state-of-the-art performance in perception tasks using point clouds. Unlike RGB images, LiDAR point clouds captured with different sensors or varied mounting positions exhibit a significant shift in their input data distribution. This can impede the transfer of trained feature extractors between datasets, as it vastly degrades performance. We analyze the transferability of point cloud features between two different LiDAR sensor set-ups (32 and …

Cited by 46 publications (22 citation statements) | References 25 publications

“…Strong-Weak Distribution Alignment [2] puts the alignment focus on globally similar data and promotes the consistency of local structural information. To deal with 3D point clouds, Rist et al. [29] proposed a cross-sensor domain adaptation method and demonstrated that dense 3D voxels can better model sensor-invariant features. SqueezeSegV2 [30] utilizes a simulation engine to generate labeled synthetic data.…”
Section: Unsupervised Domain Adaptation (mentioning)
confidence: 99%
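To make the "dense 3D voxels" representation mentioned in this statement concrete, the sketch below bins a LiDAR point cloud into a dense binary occupancy grid. It is a minimal illustration, not the pipeline of Rist et al. [29]; the grid extents, voxel size, and helper name `voxelize` are assumptions chosen for readability.

```python
# Minimal sketch (illustrative, not from Rist et al. [29]) of turning a LiDAR
# point cloud into a dense 3D occupancy voxel grid, a sensor-agnostic input
# representation. Extents and resolution below are assumed example values.
import numpy as np

def voxelize(points: np.ndarray,
             extent_min=(-40.0, -40.0, -3.0),
             extent_max=(40.0, 40.0, 1.0),
             voxel_size=0.2) -> np.ndarray:
    """Return a dense binary occupancy grid for a point cloud of shape (N, 3)."""
    extent_min = np.asarray(extent_min)
    extent_max = np.asarray(extent_max)
    grid_shape = np.ceil((extent_max - extent_min) / voxel_size).astype(int)
    grid = np.zeros(grid_shape, dtype=np.uint8)

    # Keep only points inside the extent, then map them to voxel indices.
    inside = np.all((points >= extent_min) & (points < extent_max), axis=1)
    idx = ((points[inside] - extent_min) / voxel_size).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Toy usage with a random point cloud standing in for a LiDAR sweep.
pts = np.random.uniform(-35, 35, size=(10000, 3)) * np.array([1.0, 1.0, 0.05])
occupancy = voxelize(pts)
print(occupancy.shape, int(occupancy.sum()))
```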
“…A metric that can indicate a high degree of separation between Gaussian distributions is the Kullback-Leibler (KL) divergence [43]. Generally, for distributions N₁(μ₁, σ₁) and N₂(μ₂, σ₂), the KL divergence increases with a larger mean difference and smaller standard deviations, as can be seen in Equation (1). By calculating the mean KL divergence (MKL) over all distances with sufficient points, a metric for the quality of intensity distributions is obtained, as shown in Table 7.…”
Section: E. Intensity (mentioning)
confidence: 99%
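Equation (1) of the citing work is not reproduced in the excerpt above. For reference, the standard closed-form KL divergence between two univariate Gaussians is given below; it shows the behavior described (growing with the mean gap and with a smaller standard deviation of the reference distribution). Whether this is the exact form the citing paper labels Equation (1) is an assumption.

```latex
% Standard closed-form KL divergence between univariate Gaussians
% N_1(\mu_1, \sigma_1) and N_2(\mu_2, \sigma_2); assumed (not verified)
% to correspond to "Equation (1)" referenced in the quoted passage.
\[
  D_{\mathrm{KL}}\bigl(N_1 \,\|\, N_2\bigr)
  = \ln\frac{\sigma_2}{\sigma_1}
  + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\,\sigma_2^2}
  - \frac{1}{2}
\]
```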
“…In practice, a large variety of LiDARs are used; it is unclear how each might perform with software designed for different hardware, given the aforementioned variety. This is especially true in the context of deep learning models for object detection and point cloud segmentation, which are trained on specific sensor data and generally not transferable to other LiDARs [1]. Rather than testing algorithms on every LiDAR, it is preferable to establish some desired sensor data characteristics for each algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
“…[Domain adaptation methods] can be classified into: 1) discrepancy-based methods, which include the Maximum Mean Discrepancy (MMD) [10] and DeepCORAL [11], and which minimize the global mean or covariance-matrix discrepancy between the source and target domains; 2) adversarial-based methods, which typically employ GANs and discriminators to reduce the domain shift via domain translation; and 3) reconstruction-based methods, which use auxiliary reconstruction tasks to encourage feature invariance. A more recent LiDAR-focused domain adaptation survey [12] classifies methods into: 1) domain-invariant data representation methods [13], [14], mainly based on hand-crafted data preprocessing to move different domains into a common representation (e.g. LiDAR data rotation and normalization); 2) domain-invariant feature learning, for finding a common representation space for the source and target domains [15], [16]; 3) normalization statistics, which attempt to align the domain distributions by normalizing the mean and variance of activations; and 4) domain mapping, where source data is transformed, usually using GANs or adversarial training, to appear like target data [17], [18], [19].…”
Section: Introduction and Prior Work (mentioning)
confidence: 99%
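To make the discrepancy-based category above concrete, here is a minimal sketch of a linear-kernel MMD term and a CORAL-style covariance-alignment term between source and target feature batches. It is an illustration, not code from the cited MMD [10] or DeepCORAL [11] papers; the function names and feature shapes are assumptions.

```python
# Sketch (illustrative assumptions, not the cited implementations) of two
# discrepancy-based domain adaptation losses: a linear-kernel MMD and a
# CORAL-style covariance alignment. Feature matrices have shape (n_samples, d).
import numpy as np

def linear_mmd(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared distance between the mean source and mean target feature vectors."""
    mean_gap = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(mean_gap @ mean_gap)

def coral_loss(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared Frobenius distance between source and target feature covariances,
    normalized by 4*d^2 as in the DeepCORAL formulation."""
    d = source_feats.shape[1]
    cov_s = np.cov(source_feats, rowvar=False)
    cov_t = np.cov(target_feats, rowvar=False)
    return float(np.sum((cov_s - cov_t) ** 2) / (4 * d * d))

# Toy usage: random "source" features and shifted, rescaled "target" features.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(256, 64))
tgt = rng.normal(0.5, 1.5, size=(256, 64))
print(linear_mmd(src, tgt), coral_loss(src, tgt))
```

In a training loop, either term would typically be added to the task loss so the feature extractor is pushed to produce statistics that match across the source and target LiDAR domains.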