2022
DOI: 10.1016/j.compbiomed.2022.105952
Improved GAN: Using a transformer module generator approach for material decomposition

Cited by 10 publications (12 citation statements)
References 21 publications
“…Likewise, Y. Jiang, Chang, and Wang (2021), G. Wang et al. (2022), and B. Zhang, Gu, et al. (2022) focused on the combination of GANs and transformers to stabilize GAN training and enhance the quality of the generated samples. Also, Zadorozhnyy et al. (2021) proposed loss functions that modify the training gradient and support high discriminator scores for the real data.…”
Section: Training Instability
confidence: 99%
“…The authors' idea is to take advantage of the auto‐encoder so that the discriminator learns from a reduced representation of the given data, combining the auto‐encoder's representative features with the discriminator's discriminative features to regularize the discriminator and thereby stabilize GAN training. Likewise, Y. Jiang, Chang, and Wang (2021), G. Wang et al. (2022), and B. Zhang, Gu, et al. (2022) focused on the combination of GANs and transformers to stabilize GAN training and enhance the quality of the generated samples. Also, Zadorozhnyy et al. (2021) proposed loss functions that modify the training gradient and support high discriminator scores for the real data.…”
Section: GAN Challenges
confidence: 99%
“…The optimal discriminator D*(x) is obtained by using the loss function in (12), fixing the generator G, and differentiating with respect to the discriminator D(x), as shown in…”
Section: Data Discriminator Module
confidence: 99%
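The step quoted above is the standard GAN optimality argument. As a sketch only (the citing paper's equation (12) is not reproduced here; this assumes the usual minimax loss, with p_data and p_g denoting the real and generated data densities), fixing G reduces the objective to a pointwise maximization of

f(D) = p_data(x) \log D(x) + p_g(x) \log(1 - D(x)),

and setting df/dD = 0 gives

D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}.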
“…Compare the anomaly detection performance of DUAL-ADGAN with the other nine unsupervised anomaly detection baseline models on three datasets: RealAdExchange-CPC, RealTraffic-SPEED, and RealTraffic-TravelTime.
(3) For epochs do
(4) Feed the noise vector z into the generator G_W to generate the data G_W(z)
(5) Feed the generated data G_W(z) and the real data x into the discriminator D_W
(6) Train G_W and D_W with the WGAN-GP loss function
(7) Return G_W
(8) If model is Fence-GAN:
(9) For epochs do
(10) Feed the noise vector z into the generator G_F to generate the data G_F(z)
(11) Feed the generated data G_F(z) and the real data x into the discriminator D_F
(12) Train G_F and D_F with the Fence-GAN loss function
(13) Step 2.…”
Section: Experimental Protocol
confidence: 99%
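The quoted protocol trains its first sub-model with the WGAN-GP loss. As a minimal, hedged sketch of that critic objective (a toy linear critic in NumPy, not the paper's network; for a linear map the input gradient is constant, so the gradient penalty has a closed form and no autodiff is needed):

```python
import numpy as np

# Toy linear critic D(x) = w.x + b -- an illustrative stand-in for
# the quoted discriminator D_W, not the paper's architecture.
w = np.array([0.6, 0.8])   # ||w||_2 = 1.0, so the penalty vanishes
b = 0.0

def critic(x):
    return x @ w + b

def wgan_gp_critic_loss(x_real, x_fake, lam=10.0):
    # Wasserstein term: the critic should score real data high
    # and generated data low.
    loss_w = critic(x_fake).mean() - critic(x_real).mean()
    # WGAN-GP penalizes (||grad_x D(x_hat)||_2 - 1)^2 at interpolates
    # x_hat = eps*x_real + (1-eps)*x_fake; for a linear critic that
    # gradient is w everywhere.
    grad_norm = np.linalg.norm(w)
    return loss_w + lam * (grad_norm - 1.0) ** 2

x_real = np.array([[1.0, 0.0], [0.0, 1.0]])
x_fake = np.zeros((2, 2))
loss = wgan_gp_critic_loss(x_real, x_fake)
# loss is ~ -0.7: the Wasserstein gap alone, since ||w|| = 1
```

With a nonlinear critic the gradient would instead be computed by backpropagation through the interpolated samples, which is what the quoted training loop does.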