2022
DOI: 10.48550/arxiv.2211.04872
Preprint

Visual Named Entity Linking: A New Dataset and A Baseline

Cited by 1 publication (3 citation statements)
References 31 publications
“…In this chapter, we have selected a variety of representative approaches for evaluation. These include CLIP4Clip (Luo et al, 2022) in the domain of video retrieval, the purely textual entity linking approach BLINK (Logeswaran et al, 2019), the multimodal entity linking method V2VTEL (Sun et al, 2022), and other multimodal retrieval methods such as AltCLIP and Chinese-CLIP. The primary evaluation metrics are Recall and Mean Reciprocal Rank (MRR) at K. The experimental outcomes are exhibited in Table 2.…”
Section: Static Experiments Results
confidence: 99%
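
The quoted evaluation reports Recall and Mean Reciprocal Rank (MRR) at K. As a minimal sketch of how these retrieval metrics are typically computed (the data, IDs, and function names below are illustrative, not taken from the cited work):

```python
def recall_at_k(ranked_ids, gold_id, k):
    """1.0 if the gold entity appears among the top-k ranked candidates, else 0.0."""
    return float(gold_id in ranked_ids[:k])

def reciprocal_rank(ranked_ids, gold_id):
    """Reciprocal of the gold entity's 1-based rank; 0.0 if it is absent."""
    return 1.0 / (ranked_ids.index(gold_id) + 1) if gold_id in ranked_ids else 0.0

# Hypothetical linker output: per query, a ranked candidate list and the gold entity.
predictions = [
    (["e3", "e1", "e7"], "e1"),  # gold entity ranked 2nd
    (["e2", "e5", "e1"], "e9"),  # gold entity not retrieved
]

for k in (1, 3):
    score = sum(recall_at_k(r, g, k) for r, g in predictions) / len(predictions)
    print(f"Recall@{k}: {score:.2f}")  # Recall@1: 0.00, Recall@3: 0.50
mrr = sum(reciprocal_rank(r, g) for r, g in predictions) / len(predictions)
print(f"MRR: {mrr:.2f}")               # MRR: 0.25
```

Averaging the per-query scores over the whole test set, as above, gives the corpus-level Recall@K and MRR values of the kind tabulated in the quoted experiments.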
“…Existing research primarily focuses on static image-text pairs. Researchers (Adjali et al, 2020a,b; Zhou et al, 2021; Wang et al, 2022b,a; Gan et al, 2021; Sun et al, 2022; Chengmei et al, 2023; Xing et al, 2023; Yao et al, 2023; Zhang et al, 2021) have constructed multiple datasets for different scenarios or proposed various multimodal representation methods, integrating features from different modalities to facilitate matching between entity mentions and entities.…”
Section: Multi-modal Entity Linking
confidence: 99%