2021
DOI: 10.48550/arxiv.2108.06682
Preprint

ST3D++: Denoised Self-training for Unsupervised Domain Adaptation on 3D Object Detection

Abstract: In this paper, we present a self-training method, named ST3D++, with a holistic pseudo label denoising pipeline for unsupervised domain adaptation on 3D object detection. ST3D++ aims at reducing noise in pseudo label generation as well as alleviating the negative impacts of noisy pseudo labels on model training. First, ST3D++ pre-trains the 3D object detector on the labeled source domain with random object scaling (ROS) which is designed to reduce target domain pseudo label noise arising from object scale bias…
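To make the ROS step in the abstract concrete, below is a minimal sketch of random object scaling for point-cloud training samples. This is not the paper's implementation: the function name, the (0.9, 1.1) scale range, and the use of a single isotropic factor per object are illustrative assumptions; the paper applies ROS to labeled source data during pre-training.

```python
import numpy as np

def random_object_scaling(points, gt_boxes, scale_range=(0.9, 1.1), rng=None):
    """Rescale each annotated object and the LiDAR points inside its box.

    points:   (N, 3+) array; columns 0-2 are x, y, z.
    gt_boxes: (M, 7) array of [cx, cy, cz, dx, dy, dz, yaw].
    Returns scaled copies of both arrays.
    """
    if rng is None:
        rng = np.random.default_rng()
    points, gt_boxes = points.copy(), gt_boxes.copy()

    for box in gt_boxes:
        cx, cy, cz, dx, dy, dz, yaw = box
        # Express points in the box's canonical (axis-aligned) frame.
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, s], [-s, c]])          # rotation by -yaw
        local = points[:, :3] - np.array([cx, cy, cz])
        local_xy = local[:, :2] @ rot.T
        inside = (
            (np.abs(local_xy[:, 0]) <= dx / 2)
            & (np.abs(local_xy[:, 1]) <= dy / 2)
            & (np.abs(local[:, 2]) <= dz / 2)
        )

        # One random scale factor per object, applied in the canonical frame.
        factor = rng.uniform(*scale_range)
        scaled_xy = local_xy[inside] * factor
        scaled_z = local[inside, 2] * factor

        # Rotate back to the sensor frame and restore the box centre.
        back = scaled_xy @ np.array([[c, -s], [s, c]]).T   # rotation by +yaw
        points[inside, 0] = back[:, 0] + cx
        points[inside, 1] = back[:, 1] + cy
        points[inside, 2] = scaled_z + cz

        box[3:6] *= factor   # the box centre and heading are unchanged

    return points, gt_boxes
```

The intent is that a detector pre-trained on such rescaled source objects is less biased toward source-domain object sizes, which in turn reduces scale-related noise in the pseudo labels it later produces on the target domain.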

Cited by 3 publications (7 citation statements) · References 57 publications

“…MLC-Net [14] leverages the mean-teacher paradigm with three levels of consistency to facilitate the cross-domain transfer. The self-denoising frameworks, ST3D [33] and ST3D++ [32], employ three strategies (i.e., random object scaling, hybrid quality-aware triplet memory, and curriculum data augmentation) to reduce noise in pseudo label generation. Though achieving promising results, most of the aforementioned domain adaptation methods ignore the target data distribution when training the model with source data.…”
Section: Domain Adaptive 3D Object Detection
confidence: 99%
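As a reference point for the mean-teacher paradigm mentioned above, here is a generic sketch of the exponential-moving-average (EMA) teacher update that such methods rely on. It is not MLC-Net's or ST3D++'s code; the function name and the 0.999 momentum value are assumptions for illustration.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Move the teacher's weights a small step toward the student's after each iteration."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
    for t_buf, s_buf in zip(teacher.buffers(), student.buffers()):
        t_buf.copy_(s_buf)   # e.g. BatchNorm running statistics
```

The slowly updated teacher then provides the targets against which the student's predictions on the unlabeled target data are made consistent.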
“…Domain gaps for different datasets depend on object size, weather conditions, specific locations, and orientation [57]. Overall, gaps can be grouped into the two categories mentioned below.…”
Section: B. Datasets
confidence: 99%
“…2) nuScenes dataset: The nuScenes dataset contains 1000 segments of 20 seconds each for 3D object detection, where 750, 150 and 150 segments are used for training, validation and testing, respectively [10,68,69,57,70]. The annotation rate is 2 Hz, for which 28k, 6k and 6k annotated frames are available for training, validation and testing, respectively.…”
Section: Loss
confidence: 99%
“…Wang et al. [56] proposed a semi-supervised approach using object-size statistics of the target domain to resize training samples in the labelled source domain. A popular approach is the use of self-training [43,63,64,67] with a focus on generating quality pseudo-labels using temporal information [43,67] or an IoU scoring criterion for historical pseudo-labels [63,64]. In particular, while Yang et al. [63,64] have drastically improved the performance over previous works, their approach is not practical for a lidar that can adjust its scan pattern in real-time.…”
Section: Related Work
confidence: 99%
“…A popular approach is the use of self-training [43,63,64,67] with a focus on generating quality pseudo-labels using temporal information [43,67] or an IoU scoring criterion for historical pseudo-labels [63,64]. In particular, while Yang et al. [63,64] have drastically improved the performance over previous works, their approach is not practical for a lidar that can adjust its scan pattern in real-time. The method would need to be fine-tuned for every adjustment of the scan pattern, and in practice, this fine-tuned model would need to be constantly swapped according to the adjusted scan.…”
Section: Related Work
confidence: 99%
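The "IoU scoring criterion for historical pseudo-labels" referenced in the statements above can be illustrated schematically as follows. This is a simplified sketch, not ST3D++'s actual memory-update rule: boxes are reduced to axis-aligned bird's-eye-view rectangles, and the matching and score thresholds are illustrative assumptions.

```python
def bev_iou_axis_aligned(box_a, box_b):
    """Axis-aligned bird's-eye-view IoU of [cx, cy, dx, dy] boxes (yaw ignored for simplicity)."""
    ax1, ax2 = box_a[0] - box_a[2] / 2, box_a[0] + box_a[2] / 2
    ay1, ay2 = box_a[1] - box_a[3] / 2, box_a[1] + box_a[3] / 2
    bx1, bx2 = box_b[0] - box_b[2] / 2, box_b[0] + box_b[2] / 2
    by1, by2 = box_b[1] - box_b[3] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def update_pseudo_label_memory(memory, new_preds, match_iou=0.5, keep_score=0.6):
    """Keep, for each object, whichever of the stored or newly predicted pseudo-label scores higher.

    memory, new_preds: lists of (box, score) with box = [cx, cy, dx, dy].
    """
    updated, used = [], set()
    for mem_box, mem_score in memory:
        # Match each historical label to the new prediction with the highest IoU.
        best_j, best_iou = -1, 0.0
        for j, (new_box, _) in enumerate(new_preds):
            iou = bev_iou_axis_aligned(mem_box, new_box)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0 and best_iou >= match_iou:
            used.add(best_j)
            new_box, new_score = new_preds[best_j]
            # Prefer the higher-scoring label for the matched object.
            updated.append((new_box, new_score) if new_score >= mem_score else (mem_box, mem_score))
        else:
            updated.append((mem_box, mem_score))   # keep the unmatched historical label
    # Add confident new detections that were not matched to any stored label.
    for j, (new_box, new_score) in enumerate(new_preds):
        if j not in used and new_score >= keep_score:
            updated.append((new_box, new_score))
    return updated
```

The design intent, as described in the citing papers, is that pseudo-labels accumulated across self-training rounds are kept or replaced based on how well they agree with, and score against, the current detector's predictions, rather than being overwritten wholesale each round.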