2022
DOI: 10.1109/tpami.2021.3094662
Wasserstein Adversarial Regularization for Learning With Label Noise

Cited by 19 publications (18 citation statements) | References 34 publications
“…R_KL forces the local Lipschitz constant to be small with respect to the KL divergence. Adversarial regularization was first designed to solve the semi-supervised learning problem (Miyato et al., 2019), but it has been shown to be effective against label noise, as the classifier learns over an interpolation of the label and the prediction (Fatras et al., 2021a). The adversarial direction is approximated using the power iteration algorithm (Golub & van der Vorst, 2000).…”
Section: Proposed Methods
confidence: 99%
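The power-iteration approximation mentioned in this statement can be sketched as follows. This is a minimal, illustrative PyTorch snippet of a VAT-style regularizer in the spirit of Miyato et al. (2019), not the authors' exact implementation; the names `model`, `xi`, `eps` and `n_power` are placeholders.

```python
# Minimal sketch (PyTorch assumed) of the power-iteration approximation of the
# adversarial direction used by VAT-style regularizers (Miyato et al., 2019).
import torch
import torch.nn.functional as F

def kl_div(p_logits, q_logits):
    # KL(p || q) between the two predicted class distributions, averaged over the batch.
    p = F.softmax(p_logits, dim=1)
    return (p * (F.log_softmax(p_logits, dim=1) - F.log_softmax(q_logits, dim=1))).sum(dim=1).mean()

def _normalize(d):
    # Normalize each per-sample perturbation to unit L2 norm.
    norm = d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (norm + 1e-12)

def adversarial_regularizer(model, x, n_power=1, xi=1e-6, eps=2.0):
    with torch.no_grad():
        p_logits = model(x)                      # reference prediction, treated as constant
    d = _normalize(torch.randn_like(x))          # random initial direction
    for _ in range(n_power):                     # power iteration on the local curvature of the KL
        d = (xi * d).requires_grad_(True)
        loss = kl_div(p_logits, model(x + d))
        d = _normalize(torch.autograd.grad(loss, d)[0])
    return kl_div(p_logits, model(x + eps * d))  # R_KL evaluated at the adversarial perturbation
```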
“…Our classifier trained with adversarial regularization is able to detect the outliers in the measure α_c. The hyperparameters used in the adversarial regularization algorithms are similar to those used in (Miyato et al., 2019; Fatras et al., 2021a), with the notable exception of η, which is set to 10 as in the GAN experiment.…”
Section: B.2.2 Architectures and Training Details
confidence: 99%
“…When a sample has noise or a label error [22], the feature vector it maps onto in the high-dimensional feature space will be far away from the feature group of the label class; we call such points abnormal feature points. Under the interference of noisy samples, the above method may be less effective.…”
Section: Anomaly Feature Detection Strategy
confidence: 99%
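As an illustration of the idea of abnormal feature points described in this statement, a minimal sketch follows that flags samples whose feature vector lies far from the centroid of their labelled class. The threshold rule (mean plus two standard deviations of within-class distances) and the names `features` and `labels` are assumptions for illustration, not part of the cited work.

```python
# Minimal sketch of flagging "abnormal feature points": samples whose feature
# vector is far from the centroid of their labelled class in feature space.
# `features` (N, D) and `labels` (N,) are assumed to come from a trained encoder.
import numpy as np

def abnormal_feature_points(features, labels, n_std=2.0):
    flags = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)                       # class prototype
        dists = np.linalg.norm(features[idx] - centroid, axis=1)    # within-class distances
        flags[idx] = dists > dists.mean() + n_std * dists.std()     # illustrative threshold
    return flags
```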
“…This choice is key when using OT as a loss in learning [67] and inference [68]. Henceforth we describe the mini-batch framework of [69] for using OT as a loss.…”
Section: Mini-batch Optimal Transport
confidence: 99%
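A minimal sketch of the mini-batch OT idea referenced here, assuming the POT library (`ot.dist`, `ot.emd2`) and uniform weights on each sampled batch; the batch size, number of batches, and squared-Euclidean cost are illustrative choices, not the settings of [69].

```python
# Minimal sketch of a mini-batch OT loss: exact OT is solved on small batches
# drawn from the two point clouds and the per-batch costs are averaged.
import numpy as np
import ot  # POT: Python Optimal Transport

def minibatch_ot_loss(xs, xt, batch_size=64, n_batches=10, seed=None):
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_batches):
        i = rng.choice(len(xs), batch_size, replace=False)   # mini-batch from the source
        j = rng.choice(len(xt), batch_size, replace=False)   # mini-batch from the target
        a = np.full(batch_size, 1.0 / batch_size)             # uniform weights on the batch
        M = ot.dist(xs[i], xt[j])                              # squared-Euclidean cost matrix
        losses.append(ot.emd2(a, a, M))                        # exact OT cost on the mini-batch
    return float(np.mean(losses))
```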