2018
DOI: 10.48550/arxiv.1811.05577
Preprint

Aequitas: A Bias and Fairness Audit Toolkit

Abstract: Recent work has raised concerns about the risk of unintended bias in AI systems now in use, which can affect individuals unfairly based on race, gender, or religion, among other characteristics. While many bias metrics and fairness definitions have been proposed in recent years, there is no consensus on which metric or definition should be used, and there are very few available resources to operationalize them. Therefore, despite recent awareness, auditing for bias and fairness when developing and …
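The kind of group-level metric the toolkit operationalizes can be illustrated with a minimal, hand-rolled sketch. The snippet below is not the Aequitas API; it computes false positive rate (FPR) disparity across groups on a toy scored dataframe, and the column names score, label_value, and race are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical audit input: binary model decisions, ground-truth labels,
# and a protected attribute. Column names are illustrative only.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],   # model's binary decision
    "label_value": [1, 0, 0, 1, 0, 0, 0, 1],   # observed outcome
    "race":        ["A", "A", "A", "B", "B", "B", "B", "A"],
})

def fpr_by_group(frame: pd.DataFrame, attr: str) -> pd.Series:
    """False positive rate per group: P(score = 1 | label = 0, group)."""
    negatives = frame[frame["label_value"] == 0]
    return negatives.groupby(attr)["score"].mean()

fpr = fpr_by_group(df, "race")
reference = fpr.idxmin()             # take the lowest-FPR group as reference
disparity = fpr / fpr[reference]     # ratio of each group's FPR to the reference
print(fpr)
print(disparity)                     # ratios far from 1.0 flag potential bias
```

Aequitas packages this family of confusion-matrix metrics and their group disparities behind a common interface; the sketch above only mirrors the underlying arithmetic.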

Cited by 78 publications (89 citation statements) · References 10 publications

Citation statements (ordered by relevance):
“…On the other hand, TT3 applied mean-imputation and TT4 applied both median- and mode-imputation using df.fillna(), which is fairer than removing the data. While our findings suggest that removing data items with missing values (MV) introduces bias, the most popular fairness tools, AIF 360 [8], Aequitas [58], and Themis-ML [6], ignore these data items and remove the entire row/column. Our evaluation strategy confirms that the tools can integrate existing imputation methods [62] into the pipeline and allow users to choose appropriate ones.…”
Section: Fairness Analysis of Stages (mentioning)
confidence: 66%
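The imputation-versus-removal contrast drawn in the quote above can be sketched in a few lines of pandas. This is an illustrative reconstruction, not code from the cited study or tools; the income and group columns and the toy values are assumptions.

```python
import numpy as np
import pandas as pd

# Toy frame with missing values (MV) in one numeric feature.
# Column names and values are illustrative only.
df = pd.DataFrame({
    "income": [52000.0, np.nan, 61000.0, np.nan, 48000.0],
    "group":  ["A", "B", "A", "B", "A"],
})

# Strategy criticized in the quote: dropping rows with MV can
# disproportionately remove one group and skew any downstream audit.
dropped = df.dropna()

# Strategies the citing study found fairer: fill MV instead of dropping rows.
mean_imputed   = df.fillna({"income": df["income"].mean()})
median_imputed = df.fillna({"income": df["income"].median()})
mode_imputed   = df.fillna({"income": df["income"].mode().iloc[0]})

print(dropped["group"].value_counts())   # group B disappears entirely
print(mean_imputed)                      # all rows retained
```

The quoted evaluation's point is that a tool wired only to the row-removal path bakes that bias in, whereas a pluggable imputation step keeps under-represented groups in the audit.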
“…In this sub-section, we present some of the related work that deals with maintaining, as well as investigating, the fairness of data in the AI domain; these works are listed in [20][21][22][23][24][25][26][27].…”
Section: Related Work (mentioning)
confidence: 99%
“…The framework is tested on the problem of fairly predicting the acceptance of law students. In addition, in [26,27], an AI-based real-time toolkit for fairness assurance is presented.…”
Section: Related Work (mentioning)
confidence: 99%