2023
DOI: 10.1016/j.knosys.2023.110282
Lexical knowledge enhanced text matching via distilled word sense disambiguation

Cited by 7 publications (4 citation statements) · References 42 publications
“…Each synset is described by its definition, surface forms (lemmas), examples of usage (where available), and the relations between synsets, e.g., hypernymy (is-a), meronymy (is-part) or troponymy (manner-of). WN's primary use in NLP is as a sense inventory (Agirre and Edmonds, 2007;Pu et al, 2023).…”
Section: Datasetsmentioning
confidence: 99%
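The synset structure described above (a definition, surface-form lemmas, usage examples, and inter-synset relations such as hypernymy) can be sketched with a toy in-memory model. This is a hypothetical illustration, not the NLTK WordNet API; all names and data are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Synset:
    # A toy model of a WordNet synset: one sense shared by several lemmas.
    name: str            # sense key, e.g. "dog.n.01"
    definition: str      # the gloss
    lemmas: list         # surface forms that share this sense
    examples: list = field(default_factory=list)   # usage examples, where available
    hypernyms: list = field(default_factory=list)  # is-a relations to other synsets

# "dog.n.01" is-a "canine.n.01": hypernym links encode the taxonomy.
canine = Synset("canine.n.01", "a mammal with a pointed muzzle", ["canine", "canid"])
dog = Synset("dog.n.01", "a domesticated carnivorous mammal",
             ["dog", "domestic_dog"], ["the dog barked all night"], [canine])

def is_a_chain(s):
    """Walk hypernym links upward, returning synset names root-ward."""
    chain = [s.name]
    while s.hypernyms:
        s = s.hypernyms[0]
        chain.append(s.name)
    return chain
```

Using WordNet as a sense inventory then amounts to mapping an ambiguous surface form to one of the synsets whose `lemmas` contain it.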
“…Literature [16] proposed a method to enhance text matching by integrating lexical knowledge from external resources and modeling the semantics of potentially ambiguous words. Experiments corroborate that this method greatly improves text-matching performance on a Chinese corpus.…”
Section: Introductionmentioning
confidence: 99%
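One simple way to realize the idea quoted above, enriching text matching with lexical knowledge for ambiguous words, is to expand each sentence with the gloss of its disambiguated senses before scoring overlap. The sketch below is a minimal, hypothetical illustration (toy gloss table, Jaccard similarity), not the cited paper's actual model.

```python
def jaccard(a, b):
    """Token-overlap similarity between two token lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical sense gloss for an ambiguous word, standing in for the
# disambiguated WordNet definitions such a method would supply.
GLOSSES = {"bank": "financial institution that accepts deposits"}

def expand(tokens, glosses=GLOSSES):
    """Append the gloss tokens of each known ambiguous word."""
    out = list(tokens)
    for t in tokens:
        out += glosses.get(t, "").split()
    return out

s1 = "i went to the bank".split()
s2 = "she visited a financial institution".split()
base = jaccard(s1, s2)                      # no lexical overlap
enhanced = jaccard(expand(s1), expand(s2))  # gloss bridges the vocabulary gap
```

With the gloss attached, the two paraphrases share the tokens "financial" and "institution", so the enhanced score exceeds the baseline of zero.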
“…In 2021, Barba et al. [16] introduced a transformer-based neural architecture that reformulates WSD as a span-extraction problem, attaining strong performance on the English WSD task. In 2023, Pu et al. [17] proposed a lightweight WSD model that uses lexical knowledge to distill a BERT-based pretrained model. These works yield remarkable gains in model development efficiency and bring significant performance improvements in NLP.…”
Section: Introductionmentioning
confidence: 99%
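The distillation step mentioned above, compressing a large BERT-based teacher into a lightweight WSD student, typically trains the student on the teacher's temperature-softened sense distributions. Below is a minimal, framework-free sketch of that soft-label term (temperature-scaled KL divergence); all names are hypothetical and the actual cited model is BERT-based, not shown here.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened sense posteriors.

    In practice this term is combined with the usual cross-entropy on
    gold WSD labels; only the soft-label term is sketched here.
    """
    p = softmax(teacher_logits, T)   # teacher's soft sense posteriors
    q = softmax(student_logits, T)   # student's predictions
    # T**2 rescales the term so its gradient magnitude is T-independent.
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly reproduces the teacher's logits and grows as the two sense distributions diverge.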