2021
DOI: 10.5210/spir.v2021i0.12240

Which Human Faces Can an AI Generate? Lack of Diversity in This Person Does Not Exist

Abstract: In this abstract we present the results of interdisciplinary research in which we audit fake human faces generated by the website This Person Does Not Exist (TPDNE) and discuss how this system can help perpetuate normativities supported by a dependency on a limited database. Our analysis is centered on the “default generic face” that we created by overlapping random samples of fake human faces generated by TPDNE's algorithms, a version of a Generative Adversarial Network…
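The abstract describes building a “default generic face” by overlapping random samples of TPDNE outputs. As a rough illustration only, the sketch below shows one common way such a composite could be produced: pixel-wise averaging of downloaded faces. The URL, sample count, and averaging approach are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch: approximate a "default generic face" by pixel-wise
# averaging faces served by This Person Does Not Exist. This is an
# illustrative assumption, not the authors' published method.
from io import BytesIO

import numpy as np
import requests
from PIL import Image

TPDNE_URL = "https://thispersondoesnotexist.com"  # serves a new face per request


def fetch_face(url: str = TPDNE_URL) -> Image.Image:
    """Download one generated face image and return it as an RGB PIL image."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return Image.open(BytesIO(resp.content)).convert("RGB")


def composite_face(n_samples: int = 100, size: tuple[int, int] = (256, 256)) -> Image.Image:
    """Overlay n_samples faces by averaging their pixel values."""
    acc = np.zeros((size[1], size[0], 3), dtype=np.float64)  # (H, W, 3) accumulator
    for _ in range(n_samples):
        img = fetch_face().resize(size)
        acc += np.asarray(img, dtype=np.float64)
    avg = (acc / n_samples).astype(np.uint8)
    return Image.fromarray(avg)


if __name__ == "__main__":
    composite_face(n_samples=10).save("default_generic_face.png")
```

Any systematic skew in the generator's training data would be expected to surface visibly in such an averaged image, which is the intuition behind auditing the “default generic face”.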

Cited by 2 publications (2 citation statements) · References 0 publications
“…Text-to-image generation models, in particular, have the potential to extend image-editing capabilities and lead to the development of new tools for creative practitioners. On the other hand, generative methods can be leveraged for malicious purposes, including harassment and misinformation spread [20], and raise many concerns regarding social and cultural exclusion and bias [67,62,68]. These considerations inform our decision not to release code or a public demo.…”
Section: Conclusion, Limitations and Societal Impact
confidence: 99%
“…Instead of looking at these results as failures, we rather suggest embracing those 'anticipated' or 'biased' outputs. As these models are trained on real-world datasets and thus incorporate human biases [67,72], they can help to surface, identify, and confront existing assumptions and preconceptions. As such, using generative AI models can be an effective approach to make robot and social stereotypes visible [2] in order to then challenge them through designerly action.…”
Section: Discussion
confidence: 99%