Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.194

Definition Modelling for Appropriate Specificity

Abstract: Definition generation techniques aim to generate a definition of a target word or phrase given a context. In previous studies, researchers have faced various issues such as the out-of-vocabulary problem and over/under-specificity problems. Over-specific definitions present narrow word meanings, whereas under-specific definitions present general and context-insensitive meanings. Herein, we propose a method for definition generation with appropriate specificity. The proposed method addresses the aforementioned pr…

Cited by 9 publications (10 citation statements) | References 12 publications
“…NIST focuses on content words by giving them greater weight. This makes NIST more informative than assigning equal weight to every n-gram, as BLEU does (Huang et al, 2021).…”
Section: Discussion
confidence: 99%
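To make the contrast concrete, here is a minimal sketch of the two weighting schemes the statement describes, restricted to bigrams. The toy corpus, whitespace tokenisation, and all names are illustrative assumptions, not material from the cited papers.

```python
from collections import Counter
from math import log2

# Toy reference corpus; whitespace tokenisation is an assumption for brevity.
reference = "the cat sat on the mat near the cat".split()

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

unigrams = ngram_counts(reference, 1)
bigrams = ngram_counts(reference, 2)

def bleu_weight(bigram):
    # BLEU: every matched n-gram contributes the same weight.
    return 1.0

def nist_weight(bigram):
    # NIST: an n-gram's weight is its information content, estimated from
    # reference counts: info(w1 w2) = log2(count(w1) / count(w1 w2)).
    # Rare, content-bearing n-grams therefore count for more.
    return log2(unigrams[bigram[:1]] / bigrams[bigram])

for bg in [("the", "cat"), ("the", "mat")]:
    print(bg, "BLEU:", bleu_weight(bg), "NIST:", round(nist_weight(bg), 3))
# ('the', 'cat') BLEU: 1.0 NIST: 0.585   <- frequent bigram, low information
# ('the', 'mat') BLEU: 1.0 NIST: 1.585   <- rarer bigram, higher information
```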
“…Bevilacqua et al (2020) employed a novel span-based encoding scheme to fine-tune a pre-trained English Encoder-Decoder system to generate definitions. Huang et al (2021) leveraged the T5 model (Raffel et al, 2019) for this task and introduced a re-ranking mechanism to model specificity in definitions.…”
Section: Related Work
confidence: 99%
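A hedged sketch of such a generate-then-rerank pipeline follows, using an off-the-shelf T5 from the Hugging Face transformers library. The prompt format and the specificity_score heuristic are illustrative assumptions, not the re-ranking mechanism of Huang et al (2021).

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

context = "The bank was steep and covered with grass."
target = "bank"
prompt = f"define: {target} context: {context}"  # assumed input format

# Generate several candidate definitions with beam search.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, num_beams=8, num_return_sequences=8, max_length=32
)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def specificity_score(definition: str) -> float:
    # Hypothetical re-ranking signal: penalise definitions that are very
    # short (under-specific) or very long (over-specific). A learned scorer
    # would replace this toy length heuristic.
    n = len(definition.split())
    return -abs(n - 10)

best = max(candidates, key=specificity_score)
print(best)
```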
“…Owing to their complex training objectives and large parameter counts, large-scale pre-trained models can effectively extract features from large amounts of supervised and unsupervised data. By storing the learned knowledge in parameters and fine-tuning the model for specific tasks, the same model can be applied to a series of downstream natural language processing tasks (Han et al, 2021a).…”
Section: Prompt Learning
confidence: 99%
“…Prompt learning is a method of eliciting a model's knowledge by attaching additional text to its input. Prompts can be divided into manually and automatically constructed ones, according to the text attached to the input (Han et al, 2021a). Automatically constructed prompts are in turn divided into discrete and continuous ones.…”
Section: Prompt Learning
confidence: 99%
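To illustrate the discrete/continuous distinction in the statement above, here is a minimal PyTorch sketch. The prompt length, hidden size, and the SoftPrompt module are illustrative assumptions, not a specific published design.

```python
import torch
import torch.nn as nn

# Discrete prompt: extra natural-language text prepended to the input,
# e.g. a hand-written template for definition generation:
discrete_prompt = "What is the definition of {word}? Context: {context}"

# Continuous (soft) prompt: trainable embedding vectors prepended to the
# token embeddings; they need not correspond to any real vocabulary item.
class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int = 8, hidden: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: prepend 8 trainable prompt vectors to a batch of token embeddings.
soft = SoftPrompt()
tokens = torch.randn(2, 16, 768)  # stand-in for real token embeddings
print(soft(tokens).shape)         # torch.Size([2, 24, 768])
```

Only the soft prompt parameters need to be trained; the backbone model can stay frozen, which is a common motivation for the continuous variant.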
“…Much previous work used additional data to improve the performance of generation, such as example sentences (Gadetsky et al, 2018; Chang et al, 2018; Ishiwatari et al, 2019; Kong et al, 2020) and semantic features (Yang et al, 2020). Some studies also investigated how to employ PLMs for this task (Reid et al, 2020; Bevilacqua et al, 2020; Huang et al, 2021; Kong et al, 2022).…”
Section: Introduction
confidence: 99%