2023 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3593013.3594078

“I’m fully who I am”: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation

Cited by 13 publications (3 citation statements)
References 19 publications
“…Notably, a substantial set of search results focuses on bias and discrimination. These most typically relate to LLMs and include works such as van der Wal et al. [227] and Huang et al. [99], which address multiple biases and harmful stereotypes, and others, like Abid et al. [1], Felkner et al. [72], Ovalle et al. [159], and Gadiraju et al. [76], that concentrate on specific societal biases. The result set also reveals that papers focusing on a specific or limited number of risks and harms often scrutinize the unreliable performance of foundation models, or misinformation and propaganda.…”
Section: Mapping Individual Social and Biospheric Impacts Of Foundati...
confidence: 99%
“…The last few years have witnessed increasing attention toward (binary) gender bias in NLP (Sun et al., 2019; Stanczak and Augenstein, 2021; Savoldi et al., 2022a). Concurrently, emerging research has highlighted the importance of reshaping gender in NLP technologies in a more inclusive manner (Dev et al., 2021), also through the representation of non-binary identities in language (Lauscher et al., 2022; Ovalle et al., 2023). Foundational works in this area have included several applications, such as coreference resolution systems (Cao and Brandl et al., 2022) and fair rewriters (Vanmassenhove et al., 2021; Amrhein et al., 2023).…”
Section: Related Work
confidence: 99%
“…However, the widespread use of automated writing techniques without careful scrutiny can entail considerable risks. Recent studies have shown that Natural Language Generation (NLG) models are gender biased (Sheng et al., 2019, 2020, 2021a; Bender et al., 2021) and therefore pose a risk of harm to minorities when used in sensitive applications (Sheng et al., 2021b; Ovalle et al., 2023a; Prates et al., 2018). Such biases might also infiltrate the application of automated reference letter generation and cause substantial societal harm, as research in the social sciences (Madera et al., 2009; Khan et al., 2021) has unveiled how biases in professional documents lead to diminished career opportunities for gender minority groups.…”
Section: Introduction
confidence: 99%