2022
DOI: 10.1007/978-3-031-19784-0_15

Adaptive Feature Interpolation for Low-Shot Image Generation

Cited by 4 publications (5 citation statements)
References 22 publications
“…These issues result in reduced fidelity and unstable training processes. Recent few-shot improvement methods can be divided into the following three categories, i.e., data augmentation (Dai, Hang, and Guo 2022; Jeong and Shin 2021; Karras et al. 2020; Zhao et al. 2020), regularization (Kong et al. 2022; Yang et al. 2021; Zhang et al. 2019a; Zhao et al. 2021), and transfer learning (Mo, Cho, and Shin 2020; Liu et al. 2019; Ojha et al. 2021). Nevertheless, among prior optimization techniques for few-shot generation, there is still a lack of identity-controllable palmprint generation methods.…”
Section: Few-shot Generation (mentioning)
confidence: 99%
“…For the conditional image generation task, it is necessary to introduce additional conditional information into the GAN model to assist the generation, and this information usually takes the form of class labels. In this task, the gradient normalization is calculated as shown in Formula (7).…”
Section: Discriminator Architecture (mentioning)
confidence: 99%
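Formula (7) itself is not reproduced in this report. As a rough illustration only, the sketch below applies the commonly used gradient-normalization form D̂(x, y) = D(x, y) / (‖∇ₓD(x, y)‖ + |D(x, y)|) to a conditional discriminator in PyTorch; the discriminator `disc`, the class-label input `y`, and the exact normalization used by the citing paper are assumptions.

```python
import torch

def gradient_normalized_score(disc, x, y):
    """Hedged sketch of gradient normalization for a conditional discriminator:
    D_hat(x, y) = D(x, y) / (||grad_x D(x, y)|| + |D(x, y)|).
    `disc` is assumed to return one score per sample; the citing paper's
    exact Formula (7) may differ from this common formulation."""
    x = x.requires_grad_(True)
    score = disc(x, y)                                # raw conditional score, shape (N, 1)
    grad = torch.autograd.grad(score.sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1, keepdim=True)  # per-sample ||grad_x D||
    return score / (grad_norm + score.abs())          # normalized, bounded discriminator output
```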
“…In recent years, there has been an ongoing intense competition between diffusion models [1-3] and generative adversarial networks (GANs) [4] in various domains, including image generation [5-8], image super-resolution [9-12], style transfer [13-16], image transformation [17-19], image-to-image translation [20-22], and adversarial attack [23,24]. This competition has greatly advanced, and continues to advance, the development of generative models.…”
Section: Introduction (mentioning)
confidence: 99%
“…Compared to other generative models, DCGAN not only enhances the fidelity of generated images but also ensures stable training in deeper networks. However, stable training of DCGAN requires a substantial dataset; otherwise, issues such as low image fidelity and mode collapse may occur [15-17]. In order to consistently generate BSTIs, we propose the use of BSTI-DCGAN.…”
Section: Introduction (mentioning)
confidence: 99%
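For reference, the sketch below shows a minimal DCGAN-style generator in PyTorch (strided transposed convolutions with batch normalization and ReLU, tanh output). It is a generic illustration of the DCGAN design the statement refers to, not the BSTI-DCGAN of the citing paper; the layer widths and 64x64 output size are assumptions.

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator sketch (not the BSTI-DCGAN of the citing work):
    maps a latent vector (N, z_dim, 1, 1) to a 64x64 image via transposed convolutions."""
    def __init__(self, z_dim=100, base=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),   # -> 4x4
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),      # -> 64x64
            nn.Tanh(),                                                    # images in [-1, 1]
        )

    def forward(self, z):
        # z: (N, z_dim, 1, 1) latent noise -> (N, channels, 64, 64) image
        return self.net(z)
```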