2022
DOI: 10.1007/978-3-031-19778-9_33
FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Cited by 9 publications (3 citation statements) | References 20 publications
“…Most editing techniques are tailored to StyleGAN, the state-of-the-art of 2D GANs [26][27][28][29]. Several editing techniques [10,18,19,25,34] build upon StyleGAN as it uses an intermediate disentangled latent space, usually referred to as w-space. Before editing, a latent space representation of the input image has to be recovered using a process typically referred to as Inversion or Projection [1,2,12,50].…”
Section: Related Work (mentioning, confidence: 99%)
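The invert-then-edit pipeline mentioned in the excerpt above can be sketched in a few lines. The sketch below is illustrative only and does not reproduce FairStyle or any specific editing method: it assumes a hypothetical pretrained StyleGAN2-like generator `G` exposing `G.z_dim`, `G.mapping` (z to w), and `G.synthesis` (w to image), plus a precomputed semantic `direction` in w-space; real projection methods typically add perceptual (LPIPS) losses and regularizers on top of the plain pixel loss shown here.

```python
# Illustrative sketch of latent editing in StyleGAN's w-space.
# `G` is a hypothetical generator wrapper (not a real library API) with:
#   G.z_dim        -- dimensionality of the input noise z
#   G.mapping(z)   -- maps z to a (batch, num_ws, w_dim) w-space code
#   G.synthesis(w) -- renders an image from a w-space code
import torch

def project(G, target_image, num_steps=500, lr=0.01):
    """Recover a w-space code for `target_image` (inversion / projection)."""
    # Initialize at the average w, a common starting point for projection.
    z = torch.randn(1000, G.z_dim)
    w_avg = G.mapping(z).mean(dim=0, keepdim=True)
    w = w_avg.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        recon = G.synthesis(w)
        # Pixel-wise reconstruction loss only; real projectors usually add
        # a perceptual (LPIPS) term and noise regularization.
        loss = torch.nn.functional.mse_loss(recon, target_image)
        loss.backward()
        opt.step()
    return w.detach()

def edit(G, w, direction, alpha=2.0):
    """Shift the inverted code along a semantic w-space direction and re-render."""
    return G.synthesis(w + alpha * direction)
```

Once the image is inverted, editing reduces to a vector shift in w-space, which is why the disentanglement of StyleGAN's w-space matters to these techniques.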
“…The majority of research on the fairness and bias of GANs seeks to either discover and explain the biases or to remove the detrimental impact of using unbalanced data on generation outputs. The three primary areas of bias and fairness research are: discovering and explaining biases; debiasing pre-trained GANs; and improving the training and generation performance of GANs using biased datasets [5].…”
Section: Figure 1. General Architecture of Generative Adversarial Network (mentioning, confidence: 99%)
“…Why, or why not, do certain portions of latent space respond to concepts in a consistent manner? Answering this question can shed insight on how to better manipulate semantics in GAN latent spaces, while also uncovering potential dataset biases learned by generative models [1,16].…”
Section: Introduction (mentioning, confidence: 99%)