2020
DOI: 10.1109/access.2020.3028067
Memory Protection Generative Adversarial Network (MPGAN): A Framework to Overcome the Forgetting of GANs Using Parameter Regularization Methods

Abstract: Generative adversarial networks (GANs) suffer from catastrophic forgetting when learning multiple consecutive tasks. Parameter regularization methods, which constrain the parameters of the new model to remain close to those of the previous model according to parameter importance, are effective in overcoming forgetting. Many parameter regularization methods have been tried, but each is suitable only for limited types of neural networks. Targeting GANs, this paper proposes a unified framework called Memory Protection…
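The penalty the abstract alludes to can be sketched generically. The following is a minimal illustration of an importance-weighted parameter-regularization term (EWC-style), not the paper's exact MPGAN algorithm: each parameter is pulled toward the value it had after the previous task, weighted by an importance score. The function name and the toy values are assumptions for illustration.

```python
import numpy as np

def regularization_penalty(theta, theta_star, omega, lam=1.0):
    """Compute lam/2 * sum_i omega_i * (theta_i - theta_star_i)^2.

    theta      -- current parameters of the new model
    theta_star -- parameters learned on the previous task
    omega      -- per-parameter importance (e.g. a Fisher estimate)
    lam        -- regularization strength
    """
    theta, theta_star, omega = map(np.asarray, (theta, theta_star, omega))
    return 0.5 * lam * float(np.sum(omega * (theta - theta_star) ** 2))

# Parameters with large omega are held near their old values, while
# unimportant ones remain free to adapt to the new task.
old = [1.0, -2.0]
new = [1.5, -2.0]          # only the first parameter moved
importance = [4.0, 4.0]
penalty = regularization_penalty(new, old, importance, lam=2.0)
# 0.5 * 2.0 * (4.0 * 0.5**2 + 4.0 * 0.0) = 1.0
```

In training, this penalty would simply be added to the task loss of the new model; the choice of how omega is estimated is what distinguishes the individual regularization methods the abstract surveys.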

Cited by 1 publication (1 citation statement)
References 17 publications
“…There are several works that train GANs in the continual learning scenarios either with memory replay [49] or regularization [4], with the extension to VAEGAN [22] in [53].…”
Section: Continual Learning of Generative Models
Confidence: 99%