Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3313831.3376860

An Experimental Study of Bias in Platform Worker Ratings: The Role of Performance Quality and Gender

Cited by 15 publications (10 citation statements)
References 42 publications
“…In both studies, significant differences across experimental conditions were observed for female users but not males. This is consistent with other recent studies [36,24], where female and male Mechanical Turk Workers reacted differently to issues related to race and gender. This is an important takeaway for researchers who use Amazon Mechanical Turk.…”
Section: Female Vs Male Workers On Amazon Mechanical Turk (supporting)
confidence: 93%
“…Developing solutions to mitigate bias is currently a major challenge in AI [6,1,38,35,4]. Implicit gender and racial biases in technology is being studied by both AI and HCI researchers [34,38,35,24,36], and especially in the context of recidivism following ProPublica's report that COMPAS, a nationwide criminal risk assessment tool, contained racial disparities in its predictions [1,46,17,11,8,43,42].…”
Section: Related Work (mentioning)
confidence: 99%
“…Consequently, algorithms aimed at supporting decision processes, especially in high-risk contexts such as criminal justice, cannot be developed without taking into account the influences that institutional, behavioural, and social aspects have on the decisions [58]. Furthermore, human factors such as biases, preferences and deviating objectives can also influence the effectiveness of algorithm-supported decision making [40,48].…”
Section: Risk Assessment Instruments (RAI) For Criminal Recidivism (mentioning)
confidence: 99%
“…More recently, Jahanbakhsh et al [9] investigated the interaction of gender and performance on worker ratings in a simulated teamwork task on Amazon Mechanical Turk. They found that when male and female coworkers were equally low performing, the female worker received worse evaluations.…”
Section: Empirical Evidence Of Gender Bias (mentioning)
confidence: 99%