2023
DOI: 10.48550/arxiv.2302.03675
Preprint

Auditing Gender Presentation Differences in Text-to-Image Models

Abstract: Text-to-image models, which can generate high-quality images from textual input, have recently enabled various content-creation tools. Despite their significant impact on a wide range of downstream applications, the distributions of the images they generate are still not fully understood, especially with respect to potential stereotypical attributes of different genders. In this work, we propose a paradigm (Gender Presentation Differences) that utilizes fine-grained self-presentation attributes to study how…

Cited by 5 publications (4 citation statements) | References 18 publications
“…These systems were chosen because they are popular implementations of different state-of-the-art text-to-image generative AI techniques using diffusion models [12] and CLIP image embeddings [13] that were developed primarily by researchers and academics. These models have also been studied in academic research for the content nature, and cultural and social biases of their generated outputs [14,15,16,17,18,19,20]. For this experiment, Stable Diffusion's v1-4 and v2-1 pretrained weights will each be used independently in conjunction with Stable Diffusion Web UI.…”
Section: Methodology For Interviews
Confidence: 99%
“…Secondly, Stable-Diffusion V2.1 (Rombach et al 2022) is utilized to generate ten images per prompt, resulting in 5k images. To address any potential bias (Zhang et al 2023; Bakr et al 2023) in the generated images, a human evaluation was conducted to filter out the non-agnostic images based on two simple questions: 1) Do you recognize a human in the scene? 2) If yes, are the gender and race anonymous?…”
Section: Which Metric Is Better?
Confidence: 99%
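The two-question filtering protocol quoted above can be sketched as a simple predicate. The annotation fields and function name below are hypothetical illustrations, not taken from the cited work:

```python
def keep_image(annotation: dict) -> bool:
    """Return True if an image passes the two-question human filter.

    Hypothetical annotation fields:
      'human_present' - Q1: do you recognize a human in the scene?
      'anonymous'     - Q2: if yes, are the gender and race anonymous?
    """
    if not annotation["human_present"]:
        # No human in the scene: nothing to anonymize, so keep it.
        return True
    # A human is present: keep only if gender and race are anonymous.
    return annotation["anonymous"]


annotations = [
    {"id": "img_001", "human_present": False, "anonymous": False},
    {"id": "img_002", "human_present": True, "anonymous": True},
    {"id": "img_003", "human_present": True, "anonymous": False},
]
kept = [a["id"] for a in annotations if keep_image(a)]
# img_003 is filtered out: a recognizable, non-anonymous person.
```

Note this keeps human-free scenes by default; only images with a recognizable, non-anonymous person are discarded, matching the quoted two-step logic.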
“…Recent efforts focus on estimating model bias, driven by the observation that balanced data alone is not enough to create unbiased models (Wang et al 2019). Bias (Zhang et al 2023; Bolukbasi et al 2016; Caliskan, Bryson, and Narayanan 2017) is characterized by how the model represents different subgroups when generating the supergroup, such as assessing whether it depicts men and women equally in images of people. The primary cause of bias is spurious correlations captured during training.…”
Section: Introduction
Confidence: 99%
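The notion of bias described above, how evenly a model represents subgroups when prompted with the supergroup, can be illustrated with a toy parity measure. The counts and the specific gap metric here are illustrative assumptions, not the metric of any cited paper:

```python
from collections import Counter


def parity_gap(labels):
    """Max deviation of any subgroup's share from a uniform share.

    0.0 means perfectly balanced; larger values mean more skew.
    """
    counts = Counter(labels)
    n = len(labels)
    uniform = 1 / len(counts)
    return max(abs(c / n - uniform) for c in counts.values())


# Toy example: perceived gender labels for 10 images generated from a
# gender-neutral prompt such as "a photo of a doctor" (made-up counts).
labels = ["male"] * 8 + ["female"] * 2
gap = parity_gap(labels)  # |8/10 - 1/2| = 0.3
```

A perfectly balanced set of labels yields a gap of 0.0; the skew in the toy counts above yields 0.3.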
“…While there is a broad spectrum of genders [23], it is difficult to accurately identify someone's gender across this spectrum based solely on visual cues. Consequently, following previous works [2,12,61], we restrict our bias measurement to a binary gender framework and consider only male and female. BiasPainter adopts a commercial face analysis API, named Face++ Cognitive Service, to identify the gender of the person in the picture.…”
Section: Properties Assessment
Confidence: 99%