Proceedings of the First ACL Workshop on Ethics in Natural Language Processing 2017
DOI: 10.18653/v1/w17-1602
These are not the Stereotypes You are Looking For: Bias and Fairness in Authorial Gender Attribution

Abstract: Stylometric and text categorization results show that author gender can be discerned in texts with relatively high accuracy. However, it is difficult to explain what gives rise to these results and there are many possible confounding factors, such as the domain, genre, and target audience of a text. More fundamentally, such classification efforts risk invoking stereotyping and essentialism. We explore this issue in two datasets of Dutch literary novels, using commonly used descriptive (LIWC, topic modeling) an…

Cited by 38 publications (28 citation statements)
References 27 publications
“…Tatman (2017) investigates the impact of gender and dialect on deployed speech recognition systems, while another work introduces a method to reduce amplification effects in models trained on gender-biased datasets. Koolen and van Cranenburgh (2017) examine the relationship between author gender and text attributes, noting the potential for researcher interpretation bias in such studies. Both Larson (2017) and Koolen and van Cranenburgh (2017) offer guidelines to NLP researchers and computational social scientists who wish to predict gender as a variable.…”
Section: Related Work
confidence: 99%
“…Human judges perform surprisingly poorly on user profiling tasks, grounding their judgements in topical stereotypes (Carpenter et al., 2017). Statistical models, although more accurate because they capture elements of stylistic variation, are prone to propagating stereotypes as well (Costa-jussà et al., 2019; Koolen and van Cranenburgh, 2017).…”
Section: User Traits and NLP Models
confidence: 99%
“…For example, gender is usually represented as a binary variable in NLP; computational models built on this foundation risk learning gender-stereotypical patterns. For this reason, a growing line of research has sought new ways to operationalize gender in NLP (Bamman et al., 2014a; Nguyen et al., 2014; Koolen and van Cranenburgh, 2017).…”
Section: Operationalization
confidence: 99%