2022
DOI: 10.1609/aaai.v36i9.21182
Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness

Abstract: In addition to reproducing discriminatory relationships in the training data, machine learning (ML) systems can also introduce or amplify discriminatory effects. We refer to this as introduced unfairness, and investigate the conditions under which it may arise. To this end, we propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur. These criteria imply that adding the sensitive attribute as a feature removes the i…

Cited by 10 publications (6 citation statements) · References 20 publications
“…This allows a Nash equilibrium (NE) [64] to be defined, which identifies outcomes of a game where every agent is simultaneously playing a best-response. Definition 11 [48]. A policy profile π is a Nash equilibrium (NE) in a MAID if, for every…”
Section: Multi-agent Influence Diagrams
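The best-response condition in the quoted NE definition can be illustrated with a minimal sketch. The payoff matrices below are hypothetical (not taken from the paper or from any MAID): a pure-strategy profile is an NE exactly when neither player gains by unilaterally deviating.

```python
import itertools

# Illustrative 2x2 normal-form game (hypothetical payoffs):
# rows index player 1's pure policies, columns player 2's.
payoff_1 = [[3, 0], [5, 1]]  # player 1's utility
payoff_2 = [[3, 5], [0, 1]]  # player 2's utility

def is_nash(a1, a2):
    """True iff (a1, a2) is a pure NE: each player's action is a
    best response to the other's, mirroring the quoted definition."""
    best_1 = all(payoff_1[a1][a2] >= payoff_1[d][a2] for d in range(2))
    best_2 = all(payoff_2[a1][a2] >= payoff_2[a1][d] for d in range(2))
    return best_1 and best_2

equilibria = [p for p in itertools.product(range(2), repeat=2) if is_nash(*p)]
print(equilibria)  # → [(1, 1)]
```

In this prisoner's-dilemma-style game the unique pure NE is (1, 1), since deviating from it strictly lowers the deviating player's payoff.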
“…This allows a Nash equilibrium (NE) [64] to be defined, which identifies outcomes of a game where every agent is simultaneously playing a best-response. 3 Definition 11 (48). A policy profile π is a Nash equilibrium (NE) in a MAID if, for every…”
Section: Multi-agent Influence Diagramsmentioning
confidence: 99%
“…Another important and useful application of incentives is for reasoning about fairness, a number of popular and influential definitions of which are based explicitly on causal frameworks [3,46,52,63,98]. Indeed, it can be shown that all optimal policies π * in a single-decision SCIM are counterfactually unfair [52] with respect to a protected attribute A (meaning that a change to the protected attribute would change the decision made) if and only if there is an RI on A [21].…”
Section: Fairness
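The notion of counterfactual unfairness quoted above (a change to the protected attribute would change the decision) can be checked directly in a toy setting. The two policies below are illustrative stand-ins, not the paper's SCIM formalism: we intervene on the protected attribute A while holding the other input fixed and see whether the decision flips.

```python
# Toy counterfactual-unfairness check (illustrative policies, not from the paper).

def policy_using_a(a, x):
    """Hypothetical policy that conditions on the protected attribute A."""
    return int(x > 0.5 and a == 1)

def policy_ignoring_a(a, x):
    """Hypothetical policy that is invariant to interventions on A."""
    return int(x > 0.5)

def counterfactually_unfair(policy, contexts):
    """True if some context exists where do(A=0) vs do(A=1),
    holding x fixed, changes the decision."""
    return any(policy(0, x) != policy(1, x) for _, x in contexts)

contexts = [(0, 0.3), (1, 0.7), (0, 0.9)]
print(counterfactually_unfair(policy_using_a, contexts))    # → True
print(counterfactually_unfair(policy_ignoring_a, contexts)) # → False
```

The first policy is counterfactually unfair because flipping A at x = 0.7 flips the decision; the second is invariant to A by construction.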
“…If X = x, then the recommendation to Alice is arbitrary and is independent of the signal s, which is only shown to hardworking Alice. Because the mediator only gives Alice her recommendation once her decision context Pa_A is set, lazy Alice cannot know s. Therefore, in any situation, lazy Alice's action will match s with probability 1/2. Consequently, when Bob is called to play (i.e., the decision context Pa_B is set) and Alice's action matches s, Alice is twice as likely to be hardworking as lazy (so EU_B = 20/3 for offering Alice a job rather than EU_B = 6 for rejecting her).…”
Section: … A MAID Correlated Equilibrium (MAID-CE) Is an NE of This …
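The expected utilities in the excerpt follow from the stated 2:1 posterior odds that Alice is hardworking given her action matched s. The payoffs below are hypothetical, chosen only to reproduce the quoted EU_B = 20/3; the paper's exact payoff values may differ.

```python
from fractions import Fraction

# Posterior over Alice's type given her action matched the signal s:
# the excerpt states she is twice as likely hardworking as lazy.
p_hardworking = Fraction(2, 3)
p_lazy = Fraction(1, 3)

# Hypothetical payoffs for Bob (assumed, to match the quoted utilities):
# 10 for hiring a hardworking Alice, 0 for hiring a lazy one,
# and 6 for rejecting either type.
eu_offer = p_hardworking * 10 + p_lazy * 0
eu_reject = Fraction(6)

print(eu_offer)              # → 20/3
print(eu_offer > eu_reject)  # → True: offering the job is Bob's best response
```

Using `Fraction` keeps the arithmetic exact, so the comparison 20/3 > 6 is not subject to floating-point rounding.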
“…These concepts have been used to analyse the redirectability (Everitt et al 2021b; Holtman 2020) of AI systems, fairness (Everitt et al 2021a; Ashurst et al 2022), ambitiousness (Cohen, Vellambi, and Hutter 2020), and the safety of reward learning systems (Armstrong et al 2020; Everitt et al 2019; Langlois and Everitt 2021; Evans and Kasirzadeh 2021; Farquhar, Carey, and Everitt 2022). Typically, this analysis involves applying graphical criteria that indicate which properties can or cannot occur in a given diagram, based on the graph structure alone.…”
Section: Introduction