2020
DOI: 10.48550/arxiv.2006.16309
Preprint

Adversarial Learning for Debiasing Knowledge Graph Embeddings

Abstract: Knowledge Graphs (KG) are gaining increasing attention in both academia and industry. Despite their diverse benefits, recent research has identified social and cultural biases embedded in the representations learned from KGs. Such biases can have detrimental consequences for different populations and minority groups as applications of KGs begin to intersect and interact with social spheres. This paper describes our work-in-progress, which aims at identifying and mitigating such biases in Knowledge Graph (KG) embe…

Cited by 3 publications (5 citation statements)
References 13 publications
“…Learning. Another typical technique to promote fairness is to take advantage of adversary learning [10,13,17,25,59,62,64,176,197]. The basic idea of mitigating unfairness through adversary learning is to learn fair representations through a min-max game between the main task predictor and an adversarial classifier.…”
Section: Adversary (mentioning, confidence: 99%)
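The min-max game described in the statement above can be sketched in a few lines. This is a minimal toy illustration, not the cited paper's method: the data, variable names, and the simple logistic predictor/adversary are all hypothetical, and the embedding update reverses the adversary's gradient so the representation stops encoding the sensitive attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 2-d "embeddings" z for 200 entities, a main-task
# label y, and a binary sensitive attribute s that initially leaks into z.
n, d = 200, 2
s = rng.integers(0, 2, n)                   # sensitive attribute
z = rng.normal(0, 1, (n, d)) + s[:, None]   # embeddings correlated with s
y = (z[:, 0] + rng.normal(0, 0.1, n) > 0.5).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_task = np.zeros(d)   # main-task predictor
w_adv = np.zeros(d)    # adversarial classifier trying to recover s from z
lr, lam = 0.05, 0.5    # lam weighs the adversarial term in the min-max game

for _ in range(300):
    # Adversary step: descend its logistic loss at predicting s from z.
    p_adv = sigmoid(z @ w_adv)
    w_adv += lr * z.T @ (s - p_adv) / n
    # Predictor step: descend the main-task logistic loss.
    p_task = sigmoid(z @ w_task)
    w_task += lr * z.T @ (y - p_task) / n
    # Representation step: follow the task gradient but REVERSE the
    # adversary's gradient, so z becomes less informative about s.
    g_task = (y - p_task)[:, None] * w_task[None, :]
    g_adv = (s - p_adv)[:, None] * w_adv[None, :]
    z += lr * (g_task - lam * g_adv)

# After training, the adversary's accuracy at recovering s should degrade
# toward chance; exact values depend on the toy data and hyperparameters.
acc = float((((sigmoid(z @ w_adv)) > 0.5) == s).mean())
print(round(acc, 2))
```

In practice (e.g. for KG embeddings trained with a scoring model), both players are neural networks and the gradient reversal is implemented as a layer inside one backward pass, but the alternating updates above capture the same two-player objective.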
“…Many scholars from various disciplines have been pointing out that algorithms and particularly ML are biased, unfair and lack in transparency and accountability ( [1][2][3][4][5][6][7][8][9][10][11][12]). To deal with these serious issues, there is important research proceeding in the AI/ML community, particularly in computer science, data science and related disciplines aiming at developing approaches for debiasing and improving FAT ( [3,4,[12][13][14][15]). Correspondingly, there is a growing research community dealing with these issues.…”
Section: Why Fairness, Accountability and Transparency Are Not Enough (mentioning, confidence: 99%)
“…While it is obviously relevant to tackle these issues of data quality and technical bias, there is also a need for approaches to address the societal and ethical issues of AI. Several scholars thus argue that bias is not merely a technical issue ( [1,[3][4][5][6][11][12][13][14][15][16][17]). The complexity of the problem is already observable in the various different, and partially contradictory notions and definitions of fairness ( [5,11]).…”
Section: Why Fairness, Accountability and Transparency Are Not Enough (mentioning, confidence: 99%)
“…The issue of biased training data has become an area of great interest within the KG field. Typically, the bias of concern is that of attributes associated with the entities in the graph, for example gender, age or race [47,3,8]. However, recent work has shown how popularity bias is present in three frequently used non-biomedical KGs: FB15K, WN18 and YAGO3-10 [27].…”
Section: Previous Work (mentioning, confidence: 99%)
“…The issue of non-uniform graph connectivity (typically in homogeneous graphs) has begun to be studied in parallel by the field of Graph Neural Networks (GNN), where researchers have shown that models learn low-quality representations, thus making more incorrect predictions, for low-degree vertices [26,25,44]. This has also been explored in the context of homogeneous graph representation learning [3] and for random walks [23,36].…”
Section: Previous Work (mentioning, confidence: 99%)