2020
DOI: 10.1007/978-3-030-45439-5_31

Multimodal Entity Linking for Tweets

Abstract: In many information extraction applications, entity linking (EL) has emerged as a crucial task that allows leveraging information about named entities from a knowledge base. In this paper, we address the task of multimodal entity linking (MEL), an emerging research field in which textual and visual information is used to map an ambiguous mention to an entity in a knowledge base (KB). First, we propose a method for building a fully annotated Twitter dataset for MEL, where entities are defined in a Twitter KB. T…
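The MEL setup the abstract describes (combining textual and visual evidence to resolve an ambiguous mention against a KB) can be illustrated with a minimal sketch. All names, embeddings, and the fixed weighting below are hypothetical illustrations, not the paper's actual model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mel_score(mention, entity, alpha=0.5):
    """Weighted combination of textual and visual similarity.

    alpha balances the text channel against the image channel.
    """
    text_sim = cosine(mention["text_emb"], entity["text_emb"])
    img_sim = cosine(mention["img_emb"], entity["img_emb"])
    return alpha * text_sim + (1.0 - alpha) * img_sim

def link(mention, kb_entities, alpha=0.5):
    """Return the KB entity whose combined score is highest."""
    return max(kb_entities, key=lambda e: mel_score(mention, e, alpha))
```

Here `text_emb` and `img_emb` stand in for whatever sentence and image encoders produce; a real MEL system would learn the combination rather than fix `alpha` by hand.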


Cited by 40 publications (55 citation statements)
References 47 publications
“…Linking identities across social media platforms is beyond the scope of this work, but the interested reader is referred to Adjali et al. (2020) for a recent contribution to the subject.…”
mentioning
confidence: 99%
“…The named entity disambiguation system DBpedia Spotlight [11] relies mainly on entity-context similarity measures for disambiguation. Adjali et al. [12] used entity semantic similarity, context similarity, and mention probability for entity disambiguation. Hoffart et al. [13] combined mention probability, entity similarity, and graph-link-based similarity between candidate entities, using linear models to fuse these features for entity disambiguation.…”
Section: Entity Features-based Entity Disambiguation Method
mentioning
confidence: 99%
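The feature-fusion strategy attributed to Hoffart et al. in the excerpt above (a linear model over mention probability, entity similarity, and graph-based coherence) can be sketched as follows. The feature names and weights are illustrative, not taken from the cited papers:

```python
def fuse_score(features, weights):
    """Linear fusion of disambiguation features: a weighted sum."""
    return sum(weights[name] * value for name, value in features.items())

def disambiguate(candidates, weights):
    """Pick the candidate entity with the highest fused score.

    Each candidate maps feature names (e.g. mention probability,
    context similarity, entity-entity coherence) to values in [0, 1].
    """
    return max(candidates, key=lambda c: fuse_score(c["features"], weights))
```

In practice the weights would be fit on labeled data (e.g. by logistic regression) rather than set by hand.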
“…"-" in the Algorithms column means the corresponding model uses a relatively simple algorithm to select the mapping entity, such as a linear combination of features.

Model (venue, year)               Embedding    Features                            Algorithm
[42]                              pre-trained  description, type                   -
Deep-ED (EMNLP 2017) [43]         pre-trained  description, context                MLP
NeuPL (CIKM 2017) [44]            learned      description, context                -
Eshel et al. (CoNLL 2017) [45]    learned      context                             MLP
MR-Deep-ED (ACL 2018) [46]        pre-trained  description, context                MLP
Moon et al. (ACL 2018) [47]       pre-trained  context                             -
[51]                              pre-trained  description, context                MLP
SGTB-BiBSG (NAACL 2018) [52]      pre-trained  description, context                -
NCEL (COLING 2018) [53]           learned      context                             MLP
Le and Titov (ACL 2019) [54]      pre-trained  type                                MLP
Le and Titov (ACL 2019) [55]      pre-trained  [57] description, context           MLP
RRWEL (IJCAI 2019) [58]           learned      surface form, description           graph-based
RLEL (WWW 2019) [59]              pre-trained  description, context                RL
DCA (EMNLP 2019) [60]             pre-trained  surface form, description, context  MLP, RL
Gillick et al. (CoNLL 2019) [61]  pre-trained  description                         MLP
E-ELMo (arXiv 2019) [62]          learned      context                             MLP
FGS2EE (ACL 2020) [63]            pre-trained  description, context, type          MLP
ET4EL (AAAI 2020) [64]            learned                                          -
Chen et al. (AAAI 2020) [65]      pre-trained  description, context, type          MLP
REL (SIGIR 2020) [66]             learned      context                             MLP
SeqGAT (WWW 2020) [67]                         description                         MLP
DGCN (WWW 2020) [31]                           description, context, type          MLP
BLINK (EMNLP 2020) [68]                        description                         -
ELQ (EMNLP 2020) [69]                          description                         -
GNED (KBS 2020) [70]              pre-trained  description, context                MLP
JMEL (ECIR 2020) [71]             learned                                          MLP
Yamada et al. (arXiv 2020) [72]                context                             -
M3 (AAAI 2021) [73]                                                                -
Bi-MPR (AAAI 2021) [74]                        description                         MLP
Chen et al. (AAAI 2021) [75]      learned      surface form                        MLP
CHOLAN (EACL 2021) [76]                                                            -
Zhang et al. (DASFAA 2021)…”
Section: Word Embedding
mentioning
confidence: 99%
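Many rows of the survey excerpt above list "MLP" as the selection algorithm: mention and entity features are fed to a small feed-forward network whose scalar output ranks candidate entities. A minimal pure-Python sketch of such a scorer, with entirely hypothetical weights and dimensions:

```python
def mlp_score(x, W1, b1, w2, b2):
    """One-hidden-layer MLP scorer.

    x  : feature vector for a (mention, candidate entity) pair
    W1 : hidden-layer weight rows; b1 : hidden-layer biases
    w2 : output weights;           b2 : output bias
    Applies a ReLU hidden layer, then a linear output producing one score.
    """
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2
```

The candidate with the highest `mlp_score` is selected as the mapping entity; in the cited systems the weights are learned end-to-end, not fixed as here.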