2022
DOI: 10.1109/tip.2022.3167305
E2Style: Improve the Efficiency and Effectiveness of StyleGAN Inversion

Cited by 50 publications (18 citation statements)
References 54 publications
“…There are usually two ways to recover latent codes. Approaches such as PULSE and Image2StyleGAN [23], [24] directly optimize a reconstruction loss, while methods such as pSp [25], e4e [26], and E2Style [27] train an encoder on a large number of samples.…”
Section: Use of Pre-trained GANs
confidence: 99%
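As a rough illustration of the first (optimization-based) route, the sketch below recovers a latent code for a toy linear "generator" by gradient descent on a squared reconstruction loss. The generator, dimensions, and step size are illustrative assumptions only; StyleGAN itself is a deep convolutional network and the real losses are perceptual, not purely pixel-wise.

```python
import numpy as np

# Toy stand-in for a pre-trained generator: a fixed linear map.
# (Illustrative assumption -- not the actual StyleGAN architecture.)
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 16))   # maps a 16-d latent to a 64-d "image"

w_true = rng.standard_normal(16)
x_target = G @ w_true               # the observation we want to invert

# Optimization-based inversion: minimize ||G(w) - x||^2 over the latent w.
w = np.zeros(16)
lr = 1.0 / (2 * np.linalg.norm(G, 2) ** 2)   # safe step size for this quadratic
for _ in range(2000):
    residual = G @ w - x_target
    grad = 2 * G.T @ residual       # gradient of the squared-error loss
    w -= lr * grad

reconstruction_error = np.linalg.norm(G @ w - x_target)
```

Encoder-based methods (pSp, e4e, E2Style) instead amortize this optimization: a network is trained once to map images to latent codes in a single forward pass, trading some per-image accuracy for speed.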
“…Wei et al [WCZ*21] and Alaluf et al [APCO21b] find that a complex hierarchical encoder is unnecessary, especially in unstructured domains (e.g., cars, churches, horses), and instead propose simpler backbones. Wei et al [WCZ*21] further replace the 18 map2style blocks with a simple block comprising a single average pooling layer and fully connected layer. Rather than encoding an image into a set of style vectors, Kim et al [KCK*21] instead invert images into an intermediate latent space with a spatial dimension, resulting in more accurate reconstructions compared to other encoder networks.…”
Section: Encoding and Inversion
confidence: 99%
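The simplified block described above (a single average-pooling layer followed by a fully connected layer, replacing the 18 map2style blocks) can be sketched as follows. The shapes and names here are illustrative assumptions, not E2Style's actual dimensions.

```python
import numpy as np

def simple_style_block(features, weight, bias):
    """Map a (C, H, W) feature map to one style vector.

    Sketch of the simplified block: global average pooling over the
    spatial dimensions, then a single fully connected layer.
    """
    pooled = features.mean(axis=(1, 2))   # (C,)  -- average pooling
    return weight @ pooled + bias         # (style_dim,)  -- fully connected

# Hypothetical shapes: 512-channel feature map, 512-d style vector.
rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 16, 16))
W = rng.standard_normal((512, 512)) * 0.01
b = np.zeros(512)
style = simple_style_block(feats, W, b)   # one w vector for one generator layer
```

Collapsing the spatial dimensions with a single pooling layer is what makes this block cheap: the fully connected layer then operates on a C-dimensional vector rather than the full feature map.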
“…Tov et al [TAN*21] extend this to additional domains by employing a similarity loss based on a MoCo [CFGH20] feature extractor pre‐trained on ImageNet. Wei et al [WCZ*21] utilize a pre‐trained face parsing network to achieve more localized supervision during the encoder training.…”
Section: Encoding and Inversion
confidence: 99%