2022
DOI: 10.26599/air.2022.9150005

Self-Sparse Generative Adversarial Networks

Abstract: Generative adversarial networks (GANs) are unsupervised generative models that learn the data distribution through adversarial training. However, recent experiments have indicated that GANs are difficult to train because they require optimization in a high-dimensional parameter space and suffer from the zero gradient problem. In this work, we propose a self-sparse generative adversarial network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero gradient problem. In the Self-Sparse GAN, we design a…
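The abstract truncates before the design details, but the adversarial setup it refers to is the standard two-player GAN game. Below is a minimal PyTorch sketch of one adversarial training step with the non-saturating generator loss (the usual remedy for vanishing gradients); the SparseDeconvBlock is a hypothetical stand-in for the paper's self-sparse mechanism, not its published design.

```python
import torch
import torch.nn as nn

class SparseDeconvBlock(nn.Module):
    """Hypothetical stand-in (assumption, not the paper's design):
    a deconvolution followed by a learnable per-channel gate that
    suppresses low-weight feature maps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.gate = nn.Parameter(torch.zeros(out_ch))  # per-channel gate

    def forward(self, x):
        h = self.deconv(x)
        # Channels whose gates stay small are softly zeroed out.
        return h * torch.sigmoid(self.gate).view(1, -1, 1, 1)

def train_step(G, D, real, z, opt_g, opt_d):
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    opt_d.zero_grad()
    fake = G(z).detach()
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    loss_d.backward()
    opt_d.step()
    # Generator update with the non-saturating loss, which keeps
    # gradients alive when D confidently rejects the fakes.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), ones)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```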

Cited by 7 publications (2 citation statements) · References 25 publications
“…In addition, the proposed refinement module is compared against the Self-Adaptive Sparse Transform Module (SASTM) proposed by Qian et al. [67] due to its use of both channel and position attention weights. One difference between the SASTM and the proposed refinement module is that the refinement module uses multi-layer feature maps to refine individual layers of the decoder, whereas the SASTM layer is inserted after every deconvolution layer. The SASTM layer is incorporated into networks such as PANet, U-Net, and DeepLab v3+ to compare the performance of SASTM with that of the proposed refinement module.…”
Section: Results
confidence: 99%
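The insertion pattern this statement describes is straightforward to sketch. The following PyTorch example places a generic channel-plus-position attention module after every deconvolution layer of a small decoder; the attention module is a stand-in for SASTM, whose exact design is not given in this excerpt, and all class names and channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelPositionAttention(nn.Module):
    """Generic stand-in (assumption; not the published SASTM):
    per-channel weights from pooled statistics combined with a
    per-position weight map."""
    def __init__(self, ch):
        super().__init__()
        self.channel_fc = nn.Sequential(nn.Linear(ch, ch), nn.Sigmoid())
        self.position_conv = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        n, c, _, _ = x.shape
        w_ch = self.channel_fc(x.mean(dim=(2, 3))).view(n, c, 1, 1)
        w_pos = self.position_conv(x)  # shape (n, 1, H, W)
        return x * w_ch * w_pos

class Decoder(nn.Module):
    """Decoder mirroring the 'after every deconvolution layer'
    placement described in the citation statement."""
    def __init__(self, chs=(256, 128, 64)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(chs[:-1], chs[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, 4, 2, 1),
                       nn.ReLU(inplace=True),
                       ChannelPositionAttention(c_out)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```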
“…In Equation (1), FA is the false acceptance rate, FR is the false rejection rate, C_fa denotes the false acceptance cost, C_fr denotes the false rejection cost, and P_Imp denotes the prior probability of an impostor. The database in Figure 1 is a massive voice data resource, mainly composed of three sub-speaker voice databases: telephone voice, Internet voice, and other voice. For storing massive speaker representations, distributed storage provides the interface for external file upload, download, modification, and deletion [16]. Distributed storage creates voiceprint sub-libraries according to the speaker's gender, age, and other information, which is the most concise and effective way to build the voiceprint library.…”
Section: A Voice Information Database of the Speaker Recognition System
confidence: 99%
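Equation (1) itself is not reproduced in this excerpt. Its variable definitions match the standard speaker-verification detection cost function, so the sketch below assumes that form, with p_imp as the impostor prior and P_target = 1 − p_imp; the weights in the usage example are illustrative values only.

```python
# Assumed form of the detection cost function the statement's
# "Equation (1)" appears to describe (not verified against the paper):
#     DCF = C_fr * FR * P_target + C_fa * FA * P_imp
def detection_cost(fa_rate, fr_rate, c_fa, c_fr, p_imp):
    """Weighted sum of the two error rates, each scaled by its cost
    and by the prior probability of the corresponding trial type."""
    p_target = 1.0 - p_imp
    return c_fr * fr_rate * p_target + c_fa * fa_rate * p_imp

# Usage with illustrative weights: a cheap false acceptance, a costly
# false rejection, and mostly-impostor trials.
print(detection_cost(fa_rate=0.02, fr_rate=0.05, c_fa=1.0, c_fr=10.0, p_imp=0.99))
```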