Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/740

Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning

Abstract: Benefiting from the excellent ability of neural networks to learn semantic representations, existing studies on entity linking (EL) have resorted to neural networks to exploit both the local mention-to-entity compatibility and the global interdependence between different EL decisions for target entity disambiguation. However, most neural collective EL methods depend entirely upon neural networks to automatically model the semantic dependencies between different EL decisions, which lack the guidance from …
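To make the idea in the abstract concrete, here is a minimal sketch, assuming a damped random-walk update over a candidate-entity coherence graph: local mention-to-entity scores are refined by propagating evidence between candidates. The function name, shapes, and propagation rule are illustrative assumptions, not the authors' actual RRWEL architecture.

```python
# Hypothetical sketch of collective EL with random-walk propagation.
# Local compatibility scores are refined over an entity-entity coherence graph.
import numpy as np

def random_walk_refine(local_scores, coherence, damping=0.85, steps=10):
    """local_scores: (num_candidates,) local mention-to-entity compatibility.
    coherence: (num_candidates, num_candidates) pairwise entity coherence.
    Returns scores refined by a damped random walk over the coherence graph."""
    # Row-normalize coherence into a transition matrix.
    transition = coherence / (coherence.sum(axis=1, keepdims=True) + 1e-8)
    scores = local_scores / (local_scores.sum() + 1e-8)
    restart = scores.copy()
    for _ in range(steps):
        # Damped propagation: keep part of the local evidence at every step.
        scores = (1 - damping) * restart + damping * transition.T @ scores
    return scores

# Toy example: three candidate entities for one mention.
local = np.array([0.5, 0.3, 0.2])
coh = np.array([[0.0, 0.9, 0.1],
                [0.9, 0.0, 0.2],
                [0.1, 0.2, 0.0]])
print(random_walk_refine(local, coh))
```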

Cited by 43 publications (31 citation statements). References: 0 publications.
“…Typically, Cui et al. (2018) proposed ATTOrderNet, which uses a self-attention mechanism to learn sentence representations. Inspired by the successful applications of graph neural networks (GNNs) in many NLP tasks (Xue et al., 2019), Yin et al. (2019, 2021) represented input sentences with a unified SE-Graph and then applied a GRN to learn sentence representations. Very recently, we notice that Chowdhury et al. (2021) propose a BART-based sentence ordering model.…”
Section: Related Work (mentioning)
confidence: 99%
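As a rough illustration of the self-attention pooling mentioned in the excerpt above, the following sketch contextualizes word vectors with scaled dot-product self-attention and mean-pools them into a single sentence vector. The identity projections, dimensions, and pooling choice are simplifying assumptions for illustration, not ATTOrderNet itself.

```python
# Minimal sketch: self-attention over word vectors to build a sentence representation.
import numpy as np

def self_attention_sentence_repr(word_vecs):
    """word_vecs: (num_words, dim). Returns a (dim,) sentence vector."""
    d = word_vecs.shape[1]
    # Scaled dot-product self-attention with identity projections for brevity;
    # a real model would learn separate query/key/value matrices.
    scores = word_vecs @ word_vecs.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    attended = weights @ word_vecs                       # contextualized word vectors
    return attended.mean(axis=0)                         # pool into one sentence vector

sentence = np.random.rand(6, 8)                          # 6 words, 8-dim embeddings
print(self_attention_sentence_repr(sentence).shape)      # (8,)
```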
“…"-" in the column of Algorithms means the corresponding model uses a relatively simple algorithm to select the mapping entity, such as a linear combination of features. [42] pre-trained description, type -Deep-ED (EMNLP 2017) [43] pre-trained description, context MLP NeuPL (CIKM 2017) [44] learned description, context -Eshel et al (CoNLL 2017) [45] learned context MLP MR-Deep-ED (ACL 2018) [46] pre-trained description, context MLP Moon et al (ACL 2018) [47] pre-trained context - [51] pre-trained description, context MLP SGTB-BiBSG (NAACL 2018) [52] pre-trained description, context -NCEL (COLING 2018) [53] learned context MLP Le and Titov (ACL 2019) [54] pre-trained type MLP Le and Titov (ACL 2019) [55] pre-trained [57] description, context MLP RRWEL (IJCAI 2019) [58] learned surface form, description graph-based RLEL (WWW 2019) [59] pre-trained description, context RL DCA (EMNLP 2019) [60] pre-trained surface form, description, context MLP, RL Gillick et al (CoNLL 2019) [61] pre-trained description MLP E-ELMo (arXiv 2019) [62] learned context MLP FGS2EE (ACL 2020) [63] pre-trained description, context, type MLP ET4EL (AAAI 2020) [64] learned -Chen et al (AAAI 2020) [65] pre-trained description, context, type MLP REL (SIGIR 2020) [66] learned context MLP SeqGAT (WWW 2020) [67] description MLP DGCN (WWW 2020) [31] description, context, type MLP BLINK (EMNLP 2020) [68] description -ELQ (EMNLP 2020) [69] description -GNED (KBS 2020) [70] pre-trained description, context MLP JMEL (ECIR 2020) [71] learned MLP Yamada et al (arXiv 2020) [72] context -M3 (AAAI 2021) [73] -Bi-MPR (AAAI 2021) [74] description MLP Chen et al (AAAI 2021) [75] learned surface form MLP CHOLAN (EACL 2021) [76] -Zhang et al (DASFAA 2021)…”
Section: Word Embedding (mentioning)
confidence: 99%
“…In this case, many DL-based EL methods learn a domain-specific word embedding using some embedding technique. DL-based EL methods [33], [37], [38], [39], [40], [41], [44], [45], [48], [50], [53], [58], [66], [78] learned word embeddings via Word2Vec based on huge corpora such as Wikipedia. Word2Vec contains the continuous bag-of-words (CBOW) model and the skip-gram (SG) model [17].…”
Section: Learned (mentioning)
confidence: 99%
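A minimal sketch of the embedding step described above, assuming the gensim library (4.x API, where the dimensionality parameter is vector_size): sg=1 trains the skip-gram model and sg=0 the CBOW model. The toy corpus and parameter values are illustrative stand-ins for a large corpus such as Wikipedia.

```python
# Hedged sketch: learning domain-specific word embeddings with Word2Vec (gensim 4.x).
from gensim.models import Word2Vec

corpus = [
    ["entity", "linking", "maps", "mentions", "to", "knowledge", "base", "entities"],
    ["wikipedia", "is", "a", "common", "training", "corpus", "for", "embeddings"],
]

# sg=1 selects the skip-gram (SG) model; sg=0 selects continuous bag-of-words (CBOW).
sg_model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)
cbow_model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=0)

print(sg_model.wv["entity"].shape)    # (100,)
print(cbow_model.wv["entity"].shape)  # (100,)
```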
“…To alleviate this problem, global EL models jointly optimize the entire linking configuration. The key idea is to maximize a global coherence/similarity score between all linked entities (Hoffart et al., 2011; Ratinov et al., 2011; Cheng and Roth, 2013; Nguyen et al., 2014; Alhelbawy and Gaizauskas, 2014; Pershina et al., 2015; Guo and Barbosa, 2016; Globerson et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2018; Fang et al., 2019; Xue et al., 2019). Despite its significant improvement in accuracy, such global methods suffer from high complexity.…”
Section: Related Work (mentioning)
confidence: 99%
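The global objective described in this excerpt can be illustrated with a brute-force sketch: pick one candidate entity per mention so that the sum of local scores plus pairwise coherence between all linked entities is maximal. The toy scores and exhaustive search are assumptions for illustration; real systems rely on approximate inference, since exact maximization is intractable in general, which is the "high complexity" the excerpt refers to.

```python
# Illustrative brute-force global EL: maximize local scores + pairwise coherence.
from itertools import product

def global_link(candidates, local_score, coherence):
    """candidates: list of candidate-entity lists, one per mention."""
    best, best_score = None, float("-inf")
    for assignment in product(*candidates):            # one entity per mention
        score = sum(local_score[m][e] for m, e in enumerate(assignment))
        score += sum(coherence.get(frozenset((a, b)), 0.0)
                     for i, a in enumerate(assignment)
                     for b in assignment[i + 1:])
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

candidates = [["England", "England_football_team"], ["Liverpool", "Liverpool_F.C."]]
local_score = [{"England": 0.6, "England_football_team": 0.4},
               {"Liverpool": 0.5, "Liverpool_F.C.": 0.5}]
coherence = {frozenset(("England_football_team", "Liverpool_F.C.")): 0.9,
             frozenset(("England", "Liverpool")): 0.3}
print(global_link(candidates, local_score, coherence))
```

In this toy example the coherent pair England_football_team / Liverpool_F.C. wins even though "England" has the higher local score, which is exactly the collective effect that global models exploit.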