2022
DOI: 10.1007/978-3-031-19827-4_42

Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap

Cited by 7 publications (3 citation statements)
References 38 publications
“…mode-specific normalization and conditional generator, can be applied to relational data, the samples generated by GANs typically lack semantic labels and therefore cannot be used to supervise deep models. Selecting and annotating high-confidence generated samples is a powerful option, one that has been investigated in semi-supervised learning and unsupervised domain adaptation (Zhang et al. 2020; Zou et al. 2021; Chen et al. 2022). However, little attention has been devoted to the differentiable synthesis of rare relational data in the context of imbalance learning (Wang et al. 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…Specifically, the generator of a GAN can produce a large amount of unlabeled data (samples); we use the confidence of the discriminator on a synthetic sample to reflect its similarity to real samples, and only the high-confidence samples are kept. For semantic pseudo-labeling (Zou et al. 2021; Chen et al. 2022), our solution is simple yet effective: label quality is supported by a classifier committee consisting of multiple off-the-shelf pre-trained shallow classifiers, which represent the relationships between the attribute set and the class labels from diverse perspectives, and the majority vote of their class predictions is assigned as the semantic pseudo label to each high-confidence sample.…”
Section: Introduction (mentioning)
confidence: 99%
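
The excerpt above describes a concrete two-stage pipeline: filter synthetic samples by discriminator confidence, then pseudo-label the survivors with a majority vote over a committee of pre-trained shallow classifiers. Below is a minimal sketch of that idea, assuming a trained generator G, discriminator D, and a list of scikit-learn-style classifiers; all function and parameter names (select_and_pseudo_label, conf_threshold, etc.) are illustrative assumptions, not the citing paper's actual API.

```python
# Sketch only: assumes G(z) -> synthetic samples, D(x) -> realism scores in [0, 1],
# and committee members exposing a scikit-learn-style .predict(). Names illustrative.
import numpy as np
from collections import Counter

def select_and_pseudo_label(G, D, committee, n_samples=1000,
                            latent_dim=64, conf_threshold=0.9):
    """Keep synthetic samples the discriminator rates as realistic, then assign
    each survivor the majority-vote label of the classifier committee."""
    z = np.random.randn(n_samples, latent_dim)
    samples = G(z)                              # unlabeled synthetic samples
    realism = D(samples)                        # discriminator confidence per sample

    kept, labels = [], []
    for x, conf in zip(samples, realism):
        if conf < conf_threshold:               # drop low-confidence synthetics
            continue
        votes = [clf.predict(x[None, :])[0] for clf in committee]
        majority_label, _ = Counter(votes).most_common(1)[0]
        kept.append(x)
        labels.append(majority_label)           # majority vote as pseudo label
    return np.array(kept), np.array(labels)
```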
“…Fan et al. [92] designed a voting strategy that pseudo-labels target samples by searching for the nearest source neighbours in a shared feature space. Chen et al. [93] proposed quasi-balanced self-training to address class imbalance in pseudo-labelling. Cardace et al. [94] proposed refining noisy pseudo-labels by matching shape descriptors that are learned through the unsupervised task of shape reconstruction on both domains.…”
Section: Label-efficient Learning of Point Clouds (mentioning)
confidence: 99%
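
The excerpt does not spell out the quasi-balanced selection rule of Chen et al. [93] (the paper indexed here), so the sketch below shows only the generic idea behind class-balanced pseudo-label selection in self-training: rank target samples by confidence within each predicted class and take a fixed fraction per class, so rare classes are not crowded out by easy majority classes. The function name and the per_class_ratio parameter are assumptions for illustration.

```python
# Sketch only: generic class-balanced pseudo-label selection, not the exact
# quasi-balanced scheme of Chen et al. [93]. `probs` holds softmax outputs of a
# source-trained model on unlabeled target samples, shape (N, num_classes).
import numpy as np

def balanced_pseudo_label_selection(probs, per_class_ratio=0.2):
    preds = probs.argmax(axis=1)                 # hard predictions on target data
    confs = probs.max(axis=1)                    # confidence of each prediction
    selected_idx, pseudo_labels = [], []

    for c in np.unique(preds):
        idx = np.where(preds == c)[0]            # target samples predicted as class c
        k = max(1, int(len(idx) * per_class_ratio))
        top = idx[np.argsort(-confs[idx])[:k]]   # most confident within class c
        selected_idx.extend(top.tolist())
        pseudo_labels.extend([c] * k)            # per-class quota balances classes

    return np.array(selected_idx), np.array(pseudo_labels)
```

In a self-training loop, the selected samples and their pseudo labels would be added to the training set and the model retrained, iterating selection and retraining until the pseudo labels stabilize.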