Proceedings of the International Workshop on Software Fairness 2018
DOI: 10.1145/3194770.3194775
Model-based discrimination analysis

Cited by 8 publications (4 citation statements). References 14 publications.
“…Such methods should be integrated into existing data debugging tools such as Dagger [114]. To our knowledge, only two data-focused systems have been implemented towards this goal: one by Ruggieri et al. [153] for discrimination discovery, and one by Ramadan et al. [147] to uncover unwanted data dependencies through data flow analysis. A few frameworks (e.g., AIF360 [20], Aequitas [155]) provide fairness metrics and algorithmic mitigation methods, but these are not integrated into the entire system lifecycle.…”
Section: Framework and Tools
confidence: 99%
“…However, so far there exists no approach that allows modeling fairness requirements and detecting conflicts between them and other data-protection requirements as early as during the design phase of the targeted system. The need for integrating fairness in the early design stages was first highlighted by our work in [57,58].…”
Section: Fairness-aware Approaches
confidence: 99%
“…Deep neural networks (DNNs) [38] have been increasingly adopted in many fields, including computer vision [5], natural language processing [19], software engineering [13,18,32,39,45,48], etc. However, one of the crucial factors hindering DNNs from further serving applications with social impact is unintended individual discrimination [44,47,55]. Individual discrimination exists when a given instance, differing from another only in sensitive attributes (e.g., gender, race, etc.)…”
Section: Introduction
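The excerpt above defines individual discrimination as a decision flip between two instances that differ only in a sensitive attribute. A minimal sketch of that check, assuming a hypothetical classifier `predict` and an illustrative attribute layout (the attribute names, indices, and the toy biased model below are assumptions, not from the cited work):

```python
# Individual-discrimination check: an instance x is individually discriminated
# against if changing ONLY its sensitive attribute flips the model's decision,
# with all other attributes held fixed.

def exhibits_individual_discrimination(predict, x, sensitive_idx, domain):
    """Return True if flipping x[sensitive_idx] to any other value in
    `domain` changes the prediction; all other attributes stay fixed."""
    original = predict(x)
    for value in domain:
        if value == x[sensitive_idx]:
            continue
        counterpart = list(x)
        counterpart[sensitive_idx] = value  # vary only the sensitive attribute
        if predict(counterpart) != original:
            return True
    return False

# Toy stand-in classifier (hypothetical): attributes are [income, age, gender];
# it unfairly keys on gender (index 2) whenever income is below a threshold.
def biased_predict(x):
    income, age, gender = x
    return 1 if income >= 50 or gender == 0 else 0

print(exhibits_individual_discrimination(biased_predict, [30, 45, 1], 2, [0, 1]))  # True
print(exhibits_individual_discrimination(biased_predict, [60, 45, 1], 2, [0, 1]))  # False
```

Testing tools in this space (e.g., those cited in [44,47,55]) search the input space for such instance pairs rather than checking a single given instance.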