2019 · DOI: 10.1147/jrd.2019.2942287
AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias

Abstract: Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (https://github.com/ibm/aif360). The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms to use in an industrial setting and t…


Cited by 591 publications (445 citation statements). References 18 publications.
“…A key feature of WIT is that it can calculate ML fairness metrics on trained models. Many current tools offer a similar capability: IBM AI Fairness 360 [7], Audit AI [26], and GAMut [14]. A distinguishing element of our tool is its ability to interactively apply optimization procedures to make posttraining classification threshold adjustments to improve those metrics.…”
Section: Model Understanding Framework (citation type: mentioning; confidence: 99%)
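The post-training threshold adjustment the excerpt describes can be sketched in plain NumPy. This is a toy illustration under assumed data, not WIT's actual optimizer: fix the privileged group's decision threshold, then sweep the unprivileged group's threshold until the two selection rates match (statistical parity difference near zero).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy classifier scores: the privileged group tends to receive higher scores.
scores_priv = rng.uniform(0.4, 1.0, 200)
scores_unpriv = rng.uniform(0.2, 0.8, 200)

t_priv = 0.5  # fixed decision threshold for the privileged group
rate_priv = (scores_priv >= t_priv).mean()

# Sweep candidate thresholds for the unprivileged group and keep the one
# whose selection rate is closest to the privileged group's rate.
candidates = np.linspace(0.0, 1.0, 101)
gaps = [abs((scores_unpriv >= t).mean() - rate_priv) for t in candidates]
t_unpriv = candidates[int(np.argmin(gaps))]

# Statistical parity difference after the per-group threshold adjustment.
spd = (scores_unpriv >= t_unpriv).mean() - rate_priv
```

Because the unprivileged group scores lower overall, the adjustment lowers its threshold relative to the privileged group's, which is the kind of post-training correction these tools let analysts explore interactively.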
“…Algorithms which belong to the "pre-processing" family ensure that the input data is fair. This can be achieved by suppressing the sensitive attributes, by changing class labels of the data set, and by reweighting or resampling the data [11,12,13].…”
Section: B. Related Work (citation type: mentioning; confidence: 99%)
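The reweighting idea mentioned in the excerpt (Kamiran & Calders-style reweighing, which AIF360 ships as a pre-processing algorithm) can be sketched in a few lines of plain Python. This is a toy illustration of the weight formula w(g, y) = P(g)·P(y) / P(g, y), not the toolkit's API; the data below is invented.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights that make the protected attribute and the label
    statistically independent under the weighted distribution:
    w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(groups)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [(count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
            for g, y in zip(groups, labels)]

# Toy data: group 0 receives favorable labels (1) less often than group 1.
groups = [0, 0, 0, 1, 1, 1, 1, 1]
labels = [1, 0, 0, 1, 1, 1, 1, 0]
weights = reweighing_weights(groups, labels)
```

After reweighting, the weighted rate of favorable labels is identical in both groups, so a learner trained on the weighted data no longer sees the group-label correlation.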
“…Ethical issues in machine learning are being studied by researchers in both academia (X. Jiang, Sun, Yang, Zhuge, & Yao, ; Katell, ; Lepri, Oliver, Letouzé, Pentland, & Vinck, ; Nikolov, Lalmas, Flammini, & Menczer, ; Wilkie & Azzopardi, ) and industry (Bellamy et al., ). The ubiquity of machine learning applications is a reason for the increasing concern.…”
Section: Related Work (citation type: mentioning; confidence: 99%)