2018
DOI: 10.5325/jinfopoli.8.1.0078

How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications

Abstract: Organizations often employ data-driven models to inform decisions that can have a significant impact on people's lives (e.g., university admissions, hiring). In order to protect people's privacy and prevent discrimination, these decision-makers may choose to delete or avoid collecting social category data, like sex and race. In this article, we argue that such censoring can exacerbate discrimination by making biases more difficult to detect. We begin by detailing how computerized decisions can lead to biases i…
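To make the abstract's argument concrete, below is a minimal sketch in Python on synthetic data (the feature names `skill` and `proxy` and all numbers are illustrative assumptions, not an experiment from the article). It shows how a model trained without a protected attribute can still discriminate through a correlated proxy feature, and why the disparity is only measurable because the attribute was retained for auditing:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)           # protected attribute (e.g., sex)
skill = rng.normal(0, 1, n)             # legitimate qualification
proxy = group + rng.normal(0, 0.5, n)   # feature correlated with group
                                        # (e.g., neighborhood)

# Historical decisions were biased against group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# "Fairness through blindness": train without the protected attribute.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Auditing is possible only because `group` was retained alongside the data.
for g in (0, 1):
    print(f"selection rate, group {g}: {pred[group == g].mean():.2f}")

The proxy lets the model reproduce the historical bias; had the group labels been deleted rather than merely withheld from training, the disparity in selection rates would be invisible to the decision-maker.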

Cited by 21 publications (11 citation statements) · References 59 publications

“…These individuals represent a distinct demographic (wealthy owners of premium houses), whose energy use is likely significantly different to other groups (such as renters or apartment dwellers). These biases in our data will have predictable as well as unknowable consequences that must be acknowledged and ought to caution against generalisation 44.…”
Section: Discussion (mentioning)
confidence: 93%
“…Moreover, a well-known problem is that reliance on algorithms may introduce various kinds of bias, including discrimination based on gender, race, or religion (Buolamwini and Gebru, 2018). Algorithms learn from the data that are fed to them, including the prejudices associated with different kinds of groups (Williams et al, 2018). If the outputs of the models are taken at face value, decisions are likely to reproduce existing biases in society and reinforce discrimination patterns under the appearance of objectivity.…”
Section: Big Data and Algorithmic Decision-making (mentioning)
confidence: 99%
“…In this specific example, the algorithm might create a health risk factor based on the viewing of television programs! To combat this phenomenon, [Williams et al, 2018] argue that the collection and use of protected variables should not be prohibited, but rather leveraged as a means of monitoring for non-discrimination. But this is not easily done either, as will be shown below.…”
Section: Interpretability vs. Accuracy: The Opaque Effectiveness of Ne… (mentioning)
confidence: 99%
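As a concrete illustration of the monitoring role [Williams et al, 2018] propose, the sketch below computes a disparate-impact ratio from model predictions and a retained protected attribute. The 0.8 threshold is the EEOC "four-fifths" rule of thumb, and the eight-applicant data are hypothetical; neither comes from the cited paper itself:

import numpy as np

def disparate_impact(pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical decisions for eight applicants in two groups.
pred  = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

ratio = disparate_impact(pred, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: selection rates differ across protected groups")

A check like this is exactly what becomes impossible when the protected variable is deleted at collection time, which is the tension both the citing author and the underlying article point out.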