Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume 2021
DOI: 10.18653/v1/2021.eacl-main.53
Cross-lingual Entity Alignment with Incidental Supervision

Abstract: Much research effort has been put into multilingual knowledge graph (KG) embedding methods to address the entity alignment task, which seeks to match entities in different language-specific KGs that refer to the same real-world object. Such methods are often hindered by the insufficiency of seed alignment provided between KGs. Therefore, we propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme, and seeks to improve entity alignment…

Cited by 28 publications (11 citation statements)
References 69 publications
“…For cross-lingual word embeddings, most works rely on aligned words or sentences (Ruder, Vulić, and Søgaard 2019). Cao et al 2018b; Pan et al 2019; Chen et al 2021…”
Section: Related Work
confidence: 99%
“…In fact, the knowledge conveys similar semantic concepts and similar meanings across languages (Vulić and Moens 2013; Chen et al 2021), which is essential to achieve cross-lingual transferability. Therefore, how to equip pre-trained models with knowledge has become an underexplored but critical challenge for multilingual language models.…”
Section: Introduction
confidence: 99%
“…First, side information may not be available due to privacy concerns, especially for industrial applications [23,25,26]. Second, models that incorporate machine translation or pre-aligned word embeddings may be overestimated due to the name bias issue [5,21,22,28]. Thus, compared with the models employing side information, the structure-only methods are more general and not affected by the bias of benchmarks.…”
Section: Related Work
confidence: 99%
“…We compare our method against the following three groups of advanced EA methods: (1) Structure: These methods only use the structure information (i.e., triples): GCN-Align (Wang et al, 2018), MuGNN (Cao et al, 2019a), BootEA (Sun et al, 2018), MRAEA (Mao et al, 2020), JEANS (Chen et al, 2021). (2) Word-level: These methods average the pre-trained entity name vectors to construct the initial features: GM-Align, RDGCN (Wu et al, 2019a), HGCN (Wu et al, 2019b), DAT (Zeng et al, 2020b), DGMC (Fey et al, 2020).…”
Section: Baselines
confidence: 99%