2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00257
Label-Noise Robust Generative Adversarial Networks

Abstract: Recently, generative adversarial networks (GANs), which learn data distributions through adversarial training, have gained special attention owing to their high image reproduction ability. However, one limitation of standard GANs is that they recreate training images faithfully despite image degradation characteristics such as blur, noise, and compression. To remedy this, we address the problem of blur, noise, and compression robust image generation. Our objective is to learn a non-degraded image generator dir…

Cited by 56 publications (36 citation statements) · References 84 publications (301 reference statements)
“…2) Generative Adversarial Networks: GANs are popular in a variety of application domains, including photorealistic image super-resolution [23], image inpainting [24], and text-to-image synthesis [25]. Several studies have shown promising results for reproducing data in spite of labels corrupted by random noise [7], [26], [27]. Generative adversarial networks learn a deep generative model that is able to synthesize high-dimensional data samples.…”
Citation type: mentioning · confidence: 99%
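The adversarial-training idea summarized in the quote above can be sketched numerically. The following toy example is my illustration, not code from the cited paper: a one-dimensional generator g(z) = w_g·z + b_g is trained against a logistic discriminator to match samples from N(3, 1), using the standard non-saturating GAN objectives. All parameter names and constants are assumptions chosen for the sketch.

```python
import numpy as np

# Minimal 1-D GAN sketch (illustrative only): the generator
# g(z) = w_g * z + b_g tries to match the target distribution N(3, 1);
# the discriminator is a logistic classifier d(x) = sigmoid(w_d * x + b_d).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))  # clip for stability

w_g, b_g = 1.0, 0.0   # generator parameters
w_d, b_d = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(64)           # latent noise
    fake = w_g * z + b_g                  # generated samples
    real = 3.0 + rng.standard_normal(64)  # samples from the target N(3, 1)

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    b_d += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator: gradient ascent on log d(fake) (non-saturating loss).
    d_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

print(round(b_g, 1))  # the generator's offset should drift toward 3
```

The generator never sees real samples directly; it only receives gradients through the discriminator, which is the "learning a data distribution through adversarial training" the quote refers to.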
“…Generative adversarial networks (GANs) [6], a special category of generative models, can learn data distributions through adversarial learning, even in the presence of noise. Therefore, they are capable of representing noisy time series with high consistency [7]. Given that the GAN input is a noise vector, the model is inherently more robust to noisy input signals and less sensitive to inconsistencies or gaps in the training input samples.…”
Section: Introduction (mentioning) · confidence: 99%
“…[6] However, since the training data are selected on the fly rather than at the beginning, it is hard to characterize these sample-selection biases, and it is also difficult to give any theoretical guarantee on learning consistency. The noise-transition-matrix-based approaches [5] assume a single transition probability between the noisy label and the ground-truth label, independent of the individual samples. However, in real-world cases, the appearance of each sample strongly influences whether it will be misclassified.…”
Section: A. Related Work (mentioning) · confidence: 99%
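The class-conditional noise model the quote above criticizes, a single transition probability between the noisy and true label that ignores the individual sample, can be made concrete with a small sketch. This is my illustration; the matrix values and class count are assumptions, not taken from the cited work.

```python
import numpy as np

# Instance-independent label-noise model: a row-stochastic transition
# matrix T with T[i, j] = P(noisy label = j | true label = i),
# applied identically to every sample regardless of its appearance.
rng = np.random.default_rng(42)

n_classes = 3
flip_rate = 0.2
# Symmetric noise: keep the true label with prob 0.8, flip uniformly otherwise.
T = np.full((n_classes, n_classes), flip_rate / (n_classes - 1))
np.fill_diagonal(T, 1.0 - flip_rate)
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a probability distribution

def corrupt(labels, T, rng):
    """Draw a noisy label for each clean label from the matching row of T."""
    return np.array([rng.choice(len(T), p=T[y]) for y in labels])

clean = rng.integers(0, n_classes, size=10_000)
noisy = corrupt(clean, T, rng)
observed_keep = np.mean(noisy == clean)
print(round(observed_keep, 2))  # close to 1 - flip_rate = 0.8
```

Because T depends only on the class index, an easily confused sample and an unambiguous one are corrupted with exactly the same probabilities, which is the limitation the citing authors point out for real-world data.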
“…Note that the top row illustrates that the tumor core (TC) is easily annotated as the whole tumor (WT), and the bottom row shows that the enhancing tumor (ET) has a high similarity with the TC. Most approaches [5] model the probability of the label masks with errors and learn the error transition matrix (the probability of an erroneous label mask conditioned on the true label mask). However, in real cases, the appearance of each sample strongly influences whether it will be misclassified.…”
Section: Introduction (mentioning) · confidence: 99%