2018
DOI: 10.1007/978-3-030-01228-1_6

Generalized Loss-Sensitive Adversarial Learning with Manifold Margins

Cited by 25 publications (11 citation statements)
References 7 publications

“…For this purpose, the idea of adversarially training a generator together with its corresponding encoder was developed independently in Bidirectional Generative Adversarial Networks (BiGAN) [15] and Adversarially Learned Inference (ALI) [16]. The idea was later integrated into a regularized loss-sensitive GAN model with proven distributional consistency and generalizability in generating real data [70].…”

Section: GAN-Based Representations
Confidence: 99%
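
The BiGAN/ALI idea quoted above trains an encoder E jointly with the generator G by having a discriminator judge joint pairs (x, E(x)) against (G(z), z). Below is a minimal PyTorch sketch of one such training step; the network architectures, the `data_dim`/`latent_dim` sizes, and the optimizer settings are illustrative assumptions, not the cited papers' configurations.

```python
# Minimal BiGAN/ALI-style training step (illustrative sketch; all sizes assumed).
import torch
import torch.nn as nn

data_dim, latent_dim = 784, 64  # assumed, e.g. flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))      # z -> x
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))      # x -> z
D = nn.Sequential(nn.Linear(data_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))  # joint (x, z) critic

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_real):
    """One adversarial step over joint pairs (x, E(x)) vs (G(z), z)."""
    b = x_real.size(0)
    z = torch.randn(b, latent_dim)

    # Discriminator step: real joint pairs (x, E(x)) vs generated pairs (G(z), z).
    real_pair = torch.cat([x_real, E(x_real)], dim=1)
    fake_pair = torch.cat([G(z), z], dim=1)
    d_loss = bce(D(real_pair.detach()), torch.ones(b, 1)) + \
             bce(D(fake_pair.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + encoder step: fool the joint discriminator (labels flipped).
    real_pair = torch.cat([x_real, E(x_real)], dim=1)
    fake_pair = torch.cat([G(z), z], dim=1)
    ge_loss = bce(D(real_pair), torch.zeros(b, 1)) + bce(D(fake_pair), torch.ones(b, 1))
    opt_ge.zero_grad(); ge_loss.backward(); opt_ge.step()
    return d_loss.item(), ge_loss.item()
```

At the optimum of this game the joint distributions over (x, E(x)) and (G(z), z) match, which is what makes the learned E(x) act as an approximate inverse of G.
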
“…III. THE PROPOSED METHOD. As is well known, the original data contain much useful information, such as the given label information and the underlying geometric information residing in the data [35][36][37][38]. If we can appropriately exploit this information to guide the projection learning, then better classification performance will be obtained.…”

Section: Discriminatively Regularized Least-Squares
Confidence: 99%
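
The passage above advocates guiding projection learning with both label information and the data's underlying geometry. A standard way to encode the geometric part is a graph-Laplacian regularizer added to a least-squares objective. The NumPy sketch below illustrates that generic recipe only, not the cited paper's exact formulation; the k-NN graph construction and the λ weights are assumptions.

```python
# Generic manifold-regularized least squares (illustrative, not the cited method).
import numpy as np

def knn_graph(X, k=5):
    """Symmetric 0/1 adjacency over k nearest neighbours (Euclidean)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)      # exclude self-matches
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    return np.maximum(W, W.T)         # symmetrize

def manifold_ls(X, Y, lam_m=0.1, lam_r=1e-3, k=5):
    """
    Solve  min_P ||X P - Y||^2 + lam_m * tr(P^T X^T L X P) + lam_r * ||P||^2,
    where L is the graph Laplacian. The Laplacian term pulls neighbouring
    points toward nearby projections, encoding the data's local geometry.
    """
    W = knn_graph(X, k)
    L = np.diag(W.sum(1)) - W                                    # unnormalized Laplacian
    A = X.T @ X + lam_m * X.T @ L @ X + lam_r * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)                           # closed-form solution

# Tiny usage example with random data and one-hot labels (all assumed).
X = np.random.randn(100, 20)
Y = np.eye(3)[np.random.randint(0, 3, 100)]
P = manifold_ls(X, Y)
print(P.shape)  # (20, 3)
```
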
“…There have been many efforts [38, 39] to employ manifold properties to stabilize the GAN training process and improve the quality of generated samples, but none of them benefit from smart sample selection to expedite training, as suggested in the first column of Fig. 1.…”

Section: Training GANs
Confidence: 99%
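
For context on the margin idea in the paper these statements cite: a loss-sensitive GAN learns a loss function L_θ that should rank real samples below generated ones by a data-dependent margin Δ(x, G(z)), penalizing violations with a hinge. The PyTorch sketch below uses a simple pixel-space Δ as a stand-in; the paper's manifold margin is defined differently, so treat the distance choice, architectures, and hyperparameters as assumptions.

```python
# Loss-sensitive hinge objective sketch (pixel-space margin stands in for
# the paper's manifold margin; all sizes and rates are assumptions).
import torch
import torch.nn as nn

data_dim, latent_dim = 784, 64
L = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))           # loss function L_theta
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))  # generator
opt_l = torch.optim.Adam(L.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)

def margin(x, x_gen):
    # Stand-in margin: mean absolute pixel distance. The cited model
    # defines Delta over manifold coordinates instead.
    return (x - x_gen).abs().mean(dim=1, keepdim=True)

def train_step(x_real, lam=1.0):
    b = x_real.size(0)
    z = torch.randn(b, latent_dim)
    x_gen = G(z).detach()

    # L_theta step: real loss should undercut generated loss by the margin;
    # the hinge penalizes constraint violations.
    violation = torch.relu(margin(x_real, x_gen) + L(x_real) - L(x_gen))
    l_loss = L(x_real).mean() + lam * violation.mean()
    opt_l.zero_grad(); l_loss.backward(); opt_l.step()

    # Generator step: make generated samples low-loss under L_theta.
    g_loss = L(G(z)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return l_loss.item(), g_loss.item()
```
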