2021
DOI: 10.1007/978-3-030-86520-7_22

The KL-Divergence Between a Graph Model and its Fair I-Projection as a Fairness Regularizer

Abstract: Learning and reasoning over graphs is increasingly done by means of probabilistic models, e.g. exponential random graph models, graph embedding models, and graph neural networks. When graphs model relations between people, however, they will inevitably reflect biases, prejudices, and other forms of inequity and inequality. An important challenge is thus to design accurate graph modeling approaches while guaranteeing fairness according to the specific notion of fairness that the problem requires. Yet, pa…


Cited by 14 publications (7 citation statements)
References 22 publications
“…See Guedj [14] for details. Theorem 1 also holds when we replace L_m^{γ/2}(h) and L_m^{γ/2}(Q) with L_m^{γ}(h) and L_m^{γ}(Q), respectively, but we state the theorem in this form to ease the development of the later analysis.…”
Section: General PAC-Bayesian Theorems for Subgroup Generalization
confidence: 73%
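The superscripts and subscripts in the quoted passage are mangled in this excerpt; assuming the standard subgroup margin loss used in this PAC-Bayesian line of work (a reconstruction of the likely notation, not a quote from the cited paper, with V_m the target subgroup, h a hypothesis, and Q a posterior over hypotheses), the quantities involved would read:

$$
L_m^{\gamma}(h) = \frac{1}{|V_m|} \sum_{v \in V_m} \mathbf{1}\!\left[\, h(v)_{y_v} \le \gamma + \max_{c \neq y_v} h(v)_{c} \,\right],
\qquad
L_m^{\gamma}(Q) = \mathbb{E}_{h \sim Q}\, L_m^{\gamma}(h),
$$

i.e. the claim is that Theorem 1 holds with either γ/2 or γ appearing in both the hypothesis-level and the posterior-level loss.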
See 1 more Smart Citation
“…See Guedj [14] for details. 6 Theorem 1 also holds when we substitute L Ī³/2 m (h) and L Ī³/2 m (Q) as L Ī³ m (h) and L Ī³ m (Q) respectively. But we state the theorem in this form to ease the development of the later analysis.…”
Section: General Pac-bayesian Theorems For Subgroup Generalizationmentioning
confidence: 73%
“…A major factor that affects the generalization bound (6) is the distance from the target subgroup V_m to the training set V_0. The generalization bound (6) suggests that subgroups closer to the training set enjoy a better generalization guarantee; in other words, it is unfair to subgroups that lie far away from the training set.…”
Section: Implications for Fairness of Graph Neural Networks
confidence: 99%
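The implication above can be illustrated numerically. Under the (assumed) reading that the bound grows with a feature-space distance from the subgroup to the training set, a nearby subgroup receives the tighter guarantee. The setup below is entirely synthetic — `V0`, `V1`, `V2`, and `subgroup_distance` are illustrative names, not quantities from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical node features: a training set V0 and two target
# subgroups, V1 lying near V0 and V2 lying far from it.
V0 = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
V1 = rng.normal(loc=0.5, scale=1.0, size=(50, 8))   # close subgroup
V2 = rng.normal(loc=5.0, scale=1.0, size=(50, 8))   # distant subgroup

def subgroup_distance(subgroup, train):
    """Mean Euclidean distance from each subgroup node to its
    nearest training node (one simple stand-in for the distance
    appearing in a bound like (6))."""
    # Pairwise distances, shape (|subgroup|, |train|)
    d = np.linalg.norm(subgroup[:, None, :] - train[None, :, :], axis=-1)
    return d.min(axis=1).mean()

d1 = subgroup_distance(V1, V0)
d2 = subgroup_distance(V2, V0)
# A bound that grows with this distance is looser for V2 than for V1,
# i.e. the distant subgroup gets the weaker guarantee.
```

Any monotone distance (maximum mean discrepancy, Wasserstein, etc.) would make the same qualitative point; nearest-neighbor distance is used here only because it is short to compute.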
“…Additionally, regularization based on the network topology is also widely employed in link prediction. For example, Buyl et al. [24] proposed a fairness regularization term for the link prediction task. They first defined a set of probabilistic graph models that are fair w.r.t.…”
Section: Improving Group Fairness
confidence: 99%
“…We collect these open-source datasets in https://github.com/yushundong/Graph-Mining-Fairness-Data.

- MovieLens-1M [62]: fairness notions: group, individual, popularity, provider, social, user; 10,000 nodes; 1,000,000 edges; sensitive attributes: gender (2), age (7), occupation (21); used in [1], [18], [21], [33], [46], [54], [71], [76], [97], [99], [105], [159], [164], [171], [174], [175].
- MovieLens-20M [62]: fairness notion: popularity; 165,000 nodes; 20,000,000 edges; used in [84], [157].…”
Section: Benchmark Datasets
confidence: 99%
“…Finally, the Fairness Regularizer (FIPR) [12] generalizes the idea of DeBayes by proposing a regularization term that encourages fair link prediction and can be applied with any probabilistic network model. This regularizer is defined as the KL-divergence between the probabilistic node embedding model and its fair I-projection.…”
Section: P(…)
confidence: 99%
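The regularizer described above can be sketched for the simplest case of independent Bernoulli edge probabilities. Everything below — the group layout, the demographic-parity-style mean constraint, and all variable names — is an illustrative assumption, not the authors' implementation. The one structural fact used is that for independent Bernoullis, the I-projection onto a set of per-block mean constraints is an exponential tilt of the logits, with one tilt parameter per block solvable by bisection:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p) - np.log(1.0 - p)

def i_projection(p, groups, target):
    """Project independent-Bernoulli edge probabilities p onto the set
    of distributions whose mean edge probability in every group-pair
    block equals `target` (a demographic-parity-style constraint).

    The I-projection tilts each block's logits by a constant lam_b,
    q_ij = sigmoid(logit(p_ij) + lam_b), with lam_b found by bisection
    so the block mean hits the target."""
    q = p.copy()
    for a in np.unique(groups):
        for b in np.unique(groups):
            mask = np.outer(groups == a, groups == b)
            block = logit(p[mask])
            lo, hi = -20.0, 20.0
            for _ in range(60):  # bisection on the tilt parameter
                mid = (lo + hi) / 2.0
                if sigmoid(block + mid).mean() < target:
                    lo = mid
                else:
                    hi = mid
            q[mask] = sigmoid(block + (lo + hi) / 2.0)
    return q

def bernoulli_kl(p, q):
    """Sum of KL divergences between elementwise Bernoulli(p) and
    Bernoulli(q) -- the FIPR-style penalty in this toy setting."""
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

rng = np.random.default_rng(1)
groups = np.array([0] * 10 + [1] * 10)        # binary sensitive attribute
p = rng.uniform(0.1, 0.9, size=(20, 20))      # model edge probabilities
p[np.outer(groups == 0, groups == 0)] += 0.05 # mild within-group bias
p = np.clip(p, 0.05, 0.95)

q = i_projection(p, groups, target=p.mean())  # fair I-projection of p
regularizer = bernoulli_kl(p, q)              # KL(model || fair projection)
```

In the actual method this penalty would be added to the model's training loss so that gradient descent pulls the edge probabilities toward their fair projection; the sketch only shows how the projection and the KL term fit together.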