2021
DOI: 10.1109/mts.2021.3056293
Bias and Discrimination in AI: A Cross-Disciplinary Perspective

Abstract:  Operating at a large scale and impacting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions can impact a range of phenomena, from credit scores to insurance payouts to health evaluations. These forms of automation can become problematic when they place certain groups or people at a systematic disadvantage. These are cases of discrimination-which is legally defined as the unfair or unequal treatment of an individual (or group) based on cer… Show more

Cited by 143 publications (47 citation statements)
References 24 publications
“…(5) Risk of injustice and gender discrimination: water, gender, and AI are inextricably linked, as the digitalisation of water will likely enhance existing power imbalances in the sector and the lack of representation of women and other gendered identities in water-related interventions. To counteract the bias incorporated in AI applications (see also [58]), gender should be mainstreamed in the planning, implementation, and evaluation of programs. Methodologies such as gender analysis, social mapping, and sex- and gender-disaggregated data should be further encouraged.…”
Section: Discussion: A Hydrosocial Approach To Digital Water (mentioning)
confidence: 99%
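The sex- and gender-disaggregated analysis the excerpt recommends can be sketched minimally: instead of reporting a single aggregate rate, compute the rate per group. The survey field names and data below are illustrative assumptions, not taken from the cited study.

```python
from collections import defaultdict

# Hypothetical records from a water-access survey (illustrative only).
records = [
    {"gender": "woman", "has_water_access": True},
    {"gender": "woman", "has_water_access": False},
    {"gender": "man", "has_water_access": True},
    {"gender": "man", "has_water_access": True},
    {"gender": "non-binary", "has_water_access": False},
]

def disaggregate(rows, key="gender"):
    """Compute per-group access rates rather than one aggregate figure."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        totals[row[key]] += 1
        positives[row[key]] += int(row["has_water_access"])
    return {group: positives[group] / totals[group] for group in totals}

print(disaggregate(records))
# {'woman': 0.5, 'man': 1.0, 'non-binary': 0.0}
```

The aggregate rate here (3/5 = 0.6) would hide the disparity that disaggregation makes visible.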
“…Empirical findings have shown that data-driven methods can unintentionally encode existing human biases and introduce new ones [18]. According to [19], three causes of bias are:…”
Section: Causes Of Bias In AI Systems (mentioning)
confidence: 99%
“…While no consistent definition of fairness is yet established and the perception of fairness can vary across consumers [63], fairness is often considered the absence of any bias, prejudice, favoritism, or mistreatment toward individuals, groups, classes, or social categories based on their inherent or acquired characteristics [10]. Fairness and unfairness are also often related to the problem of (digital) discrimination [25,28], which is commonly characterized as the unfair or unequal treatment of individuals, groups, classes, or social categories according to certain characteristics. Discrimination is a phenomenon that may be reinforced by recommender systems, in particular when they operate on data that have inherent biases.…”
Section: User Modeling Personalization And Engagement (mentioning)
confidence: 99%
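The group-based fairness notion cited above is often operationalized as a metric such as demographic parity: the rate of positive outcomes should be similar across groups. A minimal sketch, where the group labels and outcome data are hypothetical, not drawn from the cited works:

```python
# Demographic parity: compare positive-outcome rates across groups.
# (group, outcome) pairs are illustrative; 1 = favorable decision.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(pairs, group):
    """Fraction of favorable outcomes within one group."""
    values = [y for g, y in pairs if g == group]
    return sum(values) / len(values)

def parity_gap(pairs):
    """Largest gap in positive rates between groups; 0 means parity."""
    rates = {g: positive_rate(pairs, g) for g, _ in pairs}
    return max(rates.values()) - min(rates.values())

print(parity_gap(outcomes))  # 0.75 - 0.25 = 0.5
```

A nonzero gap does not by itself prove unlawful discrimination, but it flags the kind of systematic disadvantage the abstract describes for closer inspection.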