2022
DOI: 10.1109/tnnls.2020.3042380
Segmented Generative Networks: Data Generation in the Uniform Probability Space

Cited by 6 publications (2 citation statements)
References 20 publications

“…Specifically, to enhance the diversity and expressiveness of the generated digit 5, we set the weight of the Laplace distribution to 0.4 while setting the weight of the other distributions to 0.2. … et al., 2020), SGN (Letizia & Tonello, 2022), WGAN-GP (Gulrajani et al., 2017) and MDM-GAN. To be fair, all of these models were trained with the same training samples and the same number of epochs.…”
Section: MNIST Data Set (mentioning)
confidence: 99%
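
The weighting scheme quoted above (Laplace component at 0.4, remaining components at 0.2 each) amounts to sampling latent codes from a weighted mixture. The sketch below illustrates that idea only; the choice of the non-Laplace components (Gaussian, uniform, logistic) and all other parameters are assumptions for illustration, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture components; only the Laplace component and the
# 0.4 / 0.2 weighting are taken from the excerpt above.
components = {
    "laplace":  lambda size: rng.laplace(0.0, 1.0, size),
    "gaussian": lambda size: rng.normal(0.0, 1.0, size),
    "uniform":  lambda size: rng.uniform(-1.0, 1.0, size),
    "logistic": lambda size: rng.logistic(0.0, 1.0, size),
}
weights = {"laplace": 0.4, "gaussian": 0.2, "uniform": 0.2, "logistic": 0.2}

def sample_mixture(n, dim):
    """Draw n latent vectors of length dim from the weighted mixture."""
    names = list(components)
    probs = np.array([weights[k] for k in names])
    picks = rng.choice(len(names), size=n, p=probs)  # pick a component per sample
    out = np.empty((n, dim))
    for i, c in enumerate(picks):
        out[i] = components[names[c]](dim)           # draw the vector from it
    return out

z = sample_mixture(64, 100)  # e.g. a batch of 64 latent codes of dimension 100
```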
“…Recently, the authors of (Sauer et al., 2022, 2023) proposed that faster convergence can be achieved by projecting real and synthetic samples into the input space of a discriminator network with multiple scales, using a pre-trained feature network. Similarly, the authors of (Letizia & Tonello, 2022) decoupled the generation process into two segments to improve global understanding and ensure a steady generation process. In spite of this, the segmented generative network may not be appropriate for fitting complex and diverse real data distributions, as it only uses single uniform vectors as input.…”
Section: Introduction (mentioning)
confidence: 99%
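
The statement above describes a generator that is split into two segments and driven by a uniform latent vector. The following is a minimal sketch of that idea under stated assumptions: the layer sizes, activations, and the split into two dense segments are illustrative choices and not the SGN architecture of Letizia & Tonello (2022).

```python
import torch
import torch.nn as nn

class SegmentedGenerator(nn.Module):
    """Toy two-stage ("segmented") generator fed with a uniform latent code."""

    def __init__(self, latent_dim=100, hidden_dim=256, out_dim=784):
        super().__init__()
        # First segment: map the uniform code to an intermediate representation.
        self.segment1 = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU())
        # Second segment: map the intermediate representation to data space.
        self.segment2 = nn.Sequential(nn.Linear(hidden_dim, out_dim), nn.Tanh())

    def forward(self, u):
        h = self.segment1(u)
        return self.segment2(h)

# Uniform latent input, as stated in the citation ("single uniform vectors").
u = torch.rand(16, 100)            # U(0, 1) codes for a batch of 16 samples
x_fake = SegmentedGenerator()(u)   # flattened 28x28 outputs in [-1, 1]
```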