2022
DOI: 10.1007/978-3-031-19784-0_27
IntereStyle: Encoding an Interest Region for Robust StyleGAN Inversion

Cited by 6 publications (2 citation statements)
References 32 publications
“…To apply this extraordinary latent space to real images, several works [1,2,35,40,36] have bridged the relationship between real and fake latent spaces through a process known as GAN inversion. GAN inversion methods can typically be divided into optimization-based [1,2,10,34] and encoder-based [35,40,3,47,4,49,33]. While optimization-based models exhibit high inversion quality, they require numerous optimization steps for each input image [23], resulting in significant time consumption.…”
Section: Related Work
confidence: 99%
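The trade-off the quoted passage describes — optimization-based inversion is accurate but needs many gradient steps per image — can be illustrated with a minimal sketch. The linear "generator" `G(w) = A @ w`, the latent size, the learning rate, and the squared-error loss are all toy assumptions for illustration; real methods invert a pretrained StyleGAN, not a linear map.

```python
import numpy as np

# Toy stand-in for a pretrained generator: G(w) = A @ w.
# (Assumption: real inversion methods target StyleGAN; this linear
# map only illustrates the per-image optimization loop.)
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 4))        # fixed "generator" weights
x_real = A @ rng.normal(size=4)     # target "image" to invert

def invert(x, steps=500, lr=0.01):
    """Optimization-based inversion: gradient-descend a latent w
    to minimize the reconstruction loss ||G(w) - x||^2."""
    w = np.zeros(4)
    for _ in range(steps):
        grad = 2 * A.T @ (A @ w - x)   # gradient of the squared error
        w -= lr * grad
    return w

w_hat = invert(x_real)
err = np.linalg.norm(A @ w_hat - x_real)  # small after many steps
```

The hundreds of gradient steps inside `invert` are exactly the per-image cost the citing paper flags as "significant time consumption"; encoder-based methods replace this loop with a single learned forward pass.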
“…Encoder-based approaches (Richardson et al 2021;Tov et al 2021;Hu et al 2022;Mao et al 2022) map latent codes more quickly through a learnable encoder, with better editability but worse fidelity. To keep more details, (Wei et al 2022;Moon and Park 2022) complement latent codes with the differences between reconstructed and source images. Other works (Roich et al 2022;Dinh et al 2022) have attempted to fine-tune the generator.…”
Section: Introduction
confidence: 99%
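The residual-complement idea the quote attributes to later encoder-based work — keeping detail by adding back the difference between the source and reconstructed images — can be sketched with the same kind of toy linear generator. Using a pseudo-inverse as the "encoder" is an illustrative assumption standing in for a learned feed-forward network.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 4))   # toy "generator" G(w) = A @ w
E = np.linalg.pinv(A)          # toy "encoder": one feed-forward map
                               # (assumption: real encoders are learned)

x = rng.normal(size=16)        # source "image", partly outside G's range

w = E @ x                      # single forward pass: fast but lossy
recon = A @ w                  # encoder-based reconstruction
residual = x - recon           # details the latent code cannot express
output = recon + residual      # complement the reconstruction with them
```

Because the latent lives in a low-dimensional space, `recon` alone discards whatever lies outside the generator's range; carrying the residual alongside the latent code restores those details, which is the fidelity gain the quoted sentence describes.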