Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3278721.3278751
Modeling Epistemological Principles for Bias Mitigation in AI Systems

Abstract: Artificial Intelligence (AI) has been used extensively in automatic decision making in a broad variety of scenarios, ranging from credit ratings for loans to movie recommendations. Traditional design guidelines for AI models focus primarily on accuracy maximization, but recent work has shown that economically irrational and socially unacceptable scenarios of discrimination and unfairness are likely to arise unless these issues are explicitly addressed. This undesirable behavior has several possible sources…

Cited by 26 publications (25 citation statements) · References 8 publications

“…Another issue is that algorithms can obscure discrimination in ways that are unfair and unfamiliar (Barocas and Selbst, 2016). The AI software becomes more intelligent and exhibits more agency, but the predictive and decision-making processes used by algorithms are often opaque – it is difficult to explain why a particular decision was made (Dineen et al., 2004; Konradt et al., 2016; Vasconcelos et al., 2017; McCarthy et al., 2017). Also, when algorithms use non-work-related data to make inferences about applicants’ age, race, religion and sex, it becomes difficult to determine whether firms are adhering to federal laws that protect job applicants against discrimination (Vasconcelos et al., 2017; Valentino-DeVries, 2013).…”
Section: Design Justice at the Intersection of Algorithmic Bias and Equity
confidence: 99%
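
To make the proxy problem in the passage above concrete, here is a minimal leakage-audit sketch: if a protected attribute can be predicted accurately from the remaining application features, those features encode it as a proxy, and simply dropping the protected column does not remove the information. This assumes scikit-learn; the feature names and data are illustrative, not from the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)             # protected attribute (hypothetical)
zip_code = group + rng.normal(0, 0.3, n)  # feature strongly correlated with group
years_exp = rng.normal(5, 2, n)           # feature unrelated to group
X = np.column_stack([zip_code, years_exp])

# Cross-validated accuracy of recovering the protected attribute from the
# ostensibly neutral features; a score well above 0.5 signals proxy leakage.
leak = cross_val_score(LogisticRegression(), X, group, cv=5).mean()
print(f"protected attribute recoverable with accuracy ~{leak:.2f}")
```

A score near 1.0 means any model trained on these features can discriminate by group even though the group column itself was never used.
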
“…If the historical data have an implicit bias that favors white men over Latinos, for example, then the measure of a strong candidate may have a strong correlation to race, ethnicity and gender, even if the algorithm designer has no intention of replicating past hiring decisions that marginalize groups of people based on these categories (Barocas and Selbst, 2016). IBM researchers (Vasconcelos et al., 2017) found that, in creating these algorithms, companies model historical patterns of hiring from data describing “high-performance” employees as a basis for selecting candidates with similar profiles. Consequently, biases rooted in the traditional hiring process are reproduced and encoded in the data used to train the new systems and can have drastic impacts on human lives (O’Neil, 2016; Kilpatrick, 2016; Giang, 2018).…”
Section: Design Justice at the Intersection of Algorithmic Bias and Equity
confidence: 99%
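
The passage above describes how historical bias propagates into selection outcomes; a common way to surface it is the four-fifths (80%) rule used in US employment practice, sketched below on hypothetical numbers (the figures are illustrative, not from the cited works).

```python
def disparate_impact(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group selection rate to the higher one;
    values below 0.8 commonly flag potential adverse impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy historical hiring data: group A hired 45/100, group B hired 27/100.
ratio = disparate_impact(45, 100, 27, 100)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60, below the 0.8 threshold
```

A model trained to reproduce these historical labels would inherit roughly the same ratio, which is the encoding-of-past-bias effect the passage describes.
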
“…For instance, German et al. [58] see code review as a decision process in which code from different population groups may be accepted more or less often; Rahman et al. and Bird et al. [25, 145] point out that bug-fix datasets are biased due to historical decisions of the engineers producing the data samples. Other papers, such as [16, 22, 24, 61, 80, 136, 165, 189], reflect on how projects (the data science process, the creation of fairness definitions) are conducted and on how unfairness is perceived and might arise from the problem formulation perspective.…”
Section: "Fair" Software Engineering
confidence: 99%
“…By showing the outcomes of a large number of decisions made by AI that are based on demonstrated individual decisions, it can become visible how the totality of incremental bias results in harmful treatment. Furthermore, when explicit processes are put into place to examine datasets before AI uses them to develop a decision function, data mining facilitated by AI can also help to mitigate bias if it is identified as being present in the dataset (Vasconcelos et al. 2018). If using AI can help us see who we are and what we are becoming, it offers a chance to examine whether this is who we want to be.…”
Section: AI as a Tool to Improve Ourselves
confidence: 99%
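
As a concrete reading of the dataset-examination step credited to Vasconcelos et al. (2018) above, one simple pre-training check is comparing positive-label rates across groups in the training data before any model sees it. This is a minimal sketch assuming pandas; the column names and records are hypothetical.

```python
import pandas as pd

# Hypothetical training records with a historical "high performer" label.
train = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_perf": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = train.groupby("group")["high_perf"].mean()
print(rates)                              # positive-label rate per group
print("gap:", rates.max() - rates.min())  # a large gap warrants review
```

A large gap does not by itself prove unfairness, but it flags the dataset for exactly the kind of explicit examination the passage recommends before a decision function is trained.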