2022
DOI: 10.1007/978-3-031-17189-5_24
DAMO-NLP at NLPCC-2022 Task 2: Knowledge Enhanced Robust NER for Speech Entity Linking

Abstract: Speech Entity Linking aims to recognize and disambiguate named entities in spoken language. Conventional methods suffer gravely from unfettered speech styles and the noisy transcripts generated by ASR systems. In this paper, we propose a novel approach called Knowledge Enhanced Named Entity Recognition (KENER), which focuses on improving robustness by painlessly incorporating proper knowledge in the entity recognition stage, thus improving the overall performance of entity linking. KENER first ret…
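The full abstract is truncated above, but the visible portion describes a retrieve-then-recognize pipeline: external knowledge is pulled in before the entity recognition stage so the tagger stays robust to noisy ASR transcripts. Below is a minimal sketch of that general pattern, not KENER's actual implementation; every name in it (KNOWLEDGE_BASE, retrieve_knowledge, tag_entities) is hypothetical.

```python
# Hypothetical sketch of a retrieve-then-recognize pipeline for noisy
# ASR transcripts. None of these names come from the KENER paper; they
# only illustrate the general pattern the abstract describes.

# Toy stand-in for an external knowledge source.
KNOWLEDGE_BASE = {
    "jordan": "Michael Jordan: American former basketball player.",
    "chicago bulls": "Chicago Bulls: NBA team based in Chicago.",
}

def retrieve_knowledge(transcript: str, top_k: int = 2) -> list[str]:
    """Fetch knowledge entries whose keys appear in the noisy transcript."""
    hits = [text for key, text in KNOWLEDGE_BASE.items()
            if key in transcript.lower()]
    return hits[:top_k]

def tag_entities(transcript: str, context: list[str]) -> list[tuple[str, str]]:
    """Placeholder NER step: a real system would run a sequence tagger
    over the transcript concatenated with the retrieved context."""
    augmented = transcript + " [SEP] " + " ".join(context)
    # Model inference would go here; return a dummy span for illustration.
    return [("jordan", "PER")] if "jordan" in augmented.lower() else []

asr_transcript = "so jordan scored fifty for the chicago bulls last night"
context = retrieve_knowledge(asr_transcript)
print(tag_entities(asr_transcript, context))
```

The design point is simply that retrieval happens before tagging, so the recognizer sees disambiguating context alongside the raw, error-prone transcript.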

Cited by 3 publications (2 citation statements)
References 11 publications
“…Amidst the rapid ascent of generative big model technology, a plethora of advanced pre-training word vector methodologies have emerged, encompassing, but not confined to: the MSE model [23], adept at entity linking tasks, aimed at disambiguating named entities within spoken language; the Generalized Word Vector Model CoROM [24], and the GTE model [25] developed by the Tongyi Lab; the BCE [26] model by Youdao, renowned for its robust bilingual and cross-linguistic semantic characterization prowess; and the M3E model [27] by the Moka team. These expansive word vectors, trained on colossal corpora, not only offer heightened accuracy and semantic richness at the word level but also exhibit robust generalization capabilities, demonstrating exceptional performance across a myriad of natural language processing tasks.…”
Section: Related Work
confidence: 99%
“…Knowledge Retrieval (KR) is crucial in supporting knowledge-intensive multi-modal applications, such as visual question answering (VQA) (Ma et al 2023), multimodal entity linking (Huang et al 2022) and multi-modal dialogue (Ma et al 2022). In these applications, the information available within the multi-modal contexts may be insufficient, necessitating the acquisition of external knowledge.…”
Section: Introduction
confidence: 99%