2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00386

Retrieve in Style: Unsupervised Facial Feature Transfer and Retrieval

Cited by 24 publications (10 citation statements)
References 26 publications
“…ipants 4. As can be seen from the results in Table 2, our method showed significantly better diversity than [34], with a p-value < 0.0001.…”
Section: Comparison With Supervised Methods (mentioning, confidence: 99%)
“…Given a reference image and a target image, they exchange style codes based on regional differences to transfer the appearance of an object. [4] improves upon [5] by finding more successful image-specific manipulation directions and eliminating the per-image matching overhead. [20] clusters the feature maps to find meaningful and interpretable semantic classes that can be used to create segmentation masks.…”
Section: Related Work (mentioning, confidence: 99%)
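
The clustering step described in this excerpt can be illustrated with a short, hedged sketch: running k-means over the per-pixel feature vectors of one intermediate generator layer yields a label map whose clusters behave like rough semantic masks. The `features` tensor below is a random stand-in for real StyleGAN activations; the shapes and cluster count are assumptions, not values from the cited papers.

```python
# Minimal sketch: cluster GAN feature maps into pseudo-semantic masks.
# `features` is a placeholder for an intermediate StyleGAN activation tensor.
import numpy as np
from sklearn.cluster import KMeans

def feature_maps_to_masks(features: np.ndarray, n_clusters: int = 8) -> np.ndarray:
    """Cluster per-pixel feature vectors; return an (H, W) label map.

    features: activations of shape (C, H, W) from one generator layer.
    """
    c, h, w = features.shape
    pixels = features.reshape(c, h * w).T              # each pixel is a C-dim sample
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)                        # each label ~ one semantic region

# Example with random stand-in activations:
fake_features = np.random.randn(512, 32, 32).astype(np.float32)
mask = feature_maps_to_masks(fake_features)
print(mask.shape, np.unique(mask))                     # (32, 32), labels 0..7
```

Each connected cluster can then serve as a binary mask selecting the region (e.g. hair, eyes) whose style is to be transferred.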
“…To reduce supervision, others have explored both unsupervised approaches [8,11,26,61,70] and self-supervised approaches [29,53,69]. To achieve more fine-grained control, many works have explored the mixing of latent codes [14,16,30,62] and local-based editing via semantic maps [42,82] or reference images [37,40]. Finally, to achieve text-based editing, some have leveraged powerful contrastive language-image (CLIP) models [3,9,12,21,48].…”
Section: A.2 Editing Images With StyleGAN2 (mentioning, confidence: 99%)
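
As a concrete illustration of the latent-code mixing this excerpt mentions, the following hedged sketch swaps the later layers of a StyleGAN-style W+ code, on the common assumption that coarse (early) layers govern geometry while fine (late) layers govern appearance. The 18x512 shape and the layer split are illustrative defaults only.

```python
# Hedged sketch of layer-wise latent-code mixing in a StyleGAN-style W+ space.
import numpy as np

N_LAYERS, W_DIM = 18, 512   # typical StyleGAN2 W+ shape at 1024x1024 output

def mix_latents(w_source: np.ndarray,
                w_reference: np.ndarray,
                layers: range = range(8, 18)) -> np.ndarray:
    """Copy the reference code into the chosen layers of the source code.

    Mixing only the late layers transfers texture/color from the reference
    while keeping the source's pose and shape.
    """
    w_mixed = w_source.copy()
    w_mixed[list(layers)] = w_reference[list(layers)]
    return w_mixed

w_src = np.random.randn(N_LAYERS, W_DIM)
w_ref = np.random.randn(N_LAYERS, W_DIM)
w_out = mix_latents(w_src, w_ref)   # feed to a generator's synthesis network
```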
“…person identity). Hou et al. [2022] and Tewari et al. [2020a,b] focus on searching the latent space to find latent codes corresponding to globally meaningful manipulations, while Chong et al. [2021] utilize semantic segmentation maps to locate and mix certain positions of style codes to achieve editing goals.…”
Section: StyleGAN-based Editing (mentioning, confidence: 99%)
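
This excerpt contrasts two editing modes: global edits obtained by moving along a searched latent direction, and local edits that overwrite only selected positions of a style code. A minimal sketch of both, with every name, index, and direction purely hypothetical, might look like:

```python
# Illustrative sketch of global (direction-based) vs. local (position-based)
# latent editing. All vectors and indices below are hypothetical stand-ins.
import numpy as np

def global_edit(w: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a latent code along a unit-normalized manipulation direction."""
    return w + alpha * direction / np.linalg.norm(direction)

def local_edit(s_target: np.ndarray, s_ref: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """Overwrite only the selected style-code positions (e.g. positions
    located via a segmentation map) with the reference values."""
    s_out = s_target.copy()
    s_out[idx] = s_ref[idx]
    return s_out

w = np.random.randn(512)
direction = np.random.randn(512)        # e.g. a learned "smile" direction
w_global = global_edit(w, direction, alpha=3.0)

s_t, s_r = np.random.randn(512), np.random.randn(512)
channels = np.array([5, 17, 42])        # positions tied to one facial region
s_local = local_edit(s_t, s_r, channels)
```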