2021 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm51629.2021.00194
A Multi-view Confidence-calibrated Framework for Fair and Stable Graph Representation Learning

Cited by 15 publications (47 citation statements)
References 8 publications
“…Our degree fairness is defined in a group-wise manner, which is in line with existing fairness definitions (Zafar et al 2017;Agarwal, Lakkaraju, and Zitnik 2021) and offers a flexible setup. On the one hand, a simple and practical scenario could involve just two groups, since typically the degree fairness issue is the most serious between nodes with the smallest and largest degrees.…”
Section: Problem Formulation
confidence: 99%
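As a hypothetical illustration of the group-wise setup quoted above (a two-group split at a degree cutoff, since unfairness is typically worst between the smallest- and largest-degree nodes), a degree fairness gap could be measured as follows. The function name, the median cutoff, and accuracy as the per-group metric are all illustrative assumptions, not the paper's definition:

```python
import numpy as np

def degree_fairness_gap(degrees, correct, threshold=None):
    """Split nodes into low-/high-degree groups and compare group accuracy.

    degrees:   array of node degrees
    correct:   boolean/0-1 array, whether each node was classified correctly
    threshold: degree cutoff between groups; defaults to the median degree
    """
    degrees = np.asarray(degrees)
    correct = np.asarray(correct, dtype=float)
    if threshold is None:
        threshold = np.median(degrees)
    low = degrees <= threshold          # low-degree group
    acc_low = correct[low].mean()       # accuracy on low-degree nodes
    acc_high = correct[~low].mean()     # accuracy on high-degree nodes
    return abs(acc_high - acc_low)      # absolute group-wise gap

# Hypothetical toy data: high-degree nodes are classified more accurately.
deg = [1, 2, 2, 3, 10, 12, 15, 20]
ok  = [0, 1, 0, 1, 1, 1, 1, 1]
gap = degree_fairness_gap(deg, ok)
```

With more than two groups, the same idea extends to, e.g., the maximum pairwise gap across degree buckets.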
“…Recently, there has been a line of works [3], [104], [173] that extend counterfactual fairness from traditional i.i.d. data to graph data.…”
Section: Counterfactual Fairness
confidence: 99%
“…To measure counterfactual fairness on graphs, recent works [3], [173] usually adopt Unfairness Score, which is the percentage of nodes whose predicted label changes when their sensitive feature values are changed (while other features are fixed). Beyond that, Ma et al [104] also proposed to evaluate graph counterfactual fairness by measuring the average prediction discrepancy between any two different versions of counterfactual sensitive feature assignment on all the n nodes.…”
Section: Counterfactual Fairness
confidence: 99%
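The Unfairness Score described above, the percentage of nodes whose predicted label changes when the sensitive feature is counterfactually altered while other features stay fixed, can be sketched as follows. The `predict` and `flip` callables and the array-based interface are assumptions for illustration, not the evaluation code of [3], [173]:

```python
import numpy as np

def unfairness_score(predict, X, sensitive_idx, flip):
    """Fraction of nodes whose predicted label changes when the sensitive
    feature is counterfactually altered (all other features held fixed)."""
    X_cf = X.copy()
    X_cf[:, sensitive_idx] = flip(X[:, sensitive_idx])   # counterfactual inputs
    return float(np.mean(predict(X) != predict(X_cf)))   # share of flipped labels

# Toy predictor whose decision partly depends on the sensitive column 0.
predict = lambda X: (X[:, 0] + X[:, 1] > 1).astype(int)
X = np.array([[0, 0.5], [1, 0.2], [0, 1.5], [1, 1.5]], dtype=float)
score = unfairness_score(predict, X, sensitive_idx=0, flip=lambda s: 1 - s)
```

Ma et al.'s variant instead averages the prediction discrepancy over any two counterfactual sensitive assignments across all n nodes, rather than a single flip.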
“…Most defense mechanisms focus on mitigating adversarial attacks via, e.g., graph sanitization [53], adversarial training [12], [59], and certification of robustness [6]. On the other hand, to defend against privacy attacks, one of the most popular proposed countermeasures is differential privacy (DP) [14].…”
Section: Defense of Attacks on GNNs
confidence: 99%
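As a minimal sketch of the differential-privacy idea mentioned above, the classic Laplace mechanism releases a statistic with noise calibrated to its sensitivity and a privacy budget ε. This is the textbook mechanism, not the specific GNN defense of [14]:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with eps-differential privacy via Laplace noise.

    sensitivity: max change in `value` from one record's presence/absence
    epsilon:     privacy budget (smaller = stronger privacy, more noise)
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon        # Laplace noise scale b = Δf / ε
    return value + rng.laplace(0.0, scale)

# Hypothetical use: privately release a degree count with sensitivity 1.
rng = np.random.default_rng(0)
noisy_degree = laplace_mechanism(10.0, sensitivity=1.0, epsilon=1.0, rng=rng)
```

In GNN defenses, noise of this kind is typically injected into gradients, embeddings, or aggregation outputs rather than a single scalar.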