Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.286
Label-Enhanced Hierarchical Contextualized Representation for Sequential Metaphor Identification

Abstract: Recent metaphor identification approaches mainly consider contextual text features within a sentence or introduce external linguistic features into the model, but they usually ignore the extra information that the data can provide, such as contextual metaphor information and broader discourse information. In this paper, we propose a model augmented with hierarchical contextualized representation to extract more information at both the sentence level and the discourse level. At the sentence level, we leverage t…
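The abstract frames metaphor identification as token-level classification over a hierarchical (sentence-level plus discourse-level) contextualized representation. A minimal sketch of that framing, assuming per-token embeddings from some pre-trained encoder (all array names, dimensions, and the concatenation scheme here are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-token contextual embeddings, e.g. produced by a
# pre-trained encoder (sentence level) and a document encoder (discourse level).
n_tokens, d_sent, d_disc = 6, 8, 4
sent_repr = rng.normal(size=(n_tokens, d_sent))  # sentence-level features
disc_repr = rng.normal(size=(n_tokens, d_disc))  # discourse-level features

# Hierarchical representation: concatenate both levels for each token.
h = np.concatenate([sent_repr, disc_repr], axis=-1)  # (n_tokens, d_sent + d_disc)

# Token-level binary classifier: metaphorical (1) vs. literal (0).
# Randomly initialized here; in practice the weights would be learned.
W = rng.normal(size=(d_sent + d_disc, 2))
b = np.zeros(2)
logits = h @ W + b
labels = logits.argmax(axis=-1)  # one metaphoricity label per token
```

The point of the sketch is only the data flow: each token gets one label, and the classifier sees features from both granularities at once.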

Cited by 4 publications (1 citation statement) · References 38 publications
“…Metaphor Identification Methods Similar to the construction of the datasets, metaphor identification can be viewed as a word-pair classification problem or a word sequence labeling problem. Most of the current models depend on recurrent neural networks (Do Dinh and Gurevych, 2016; Rei et al., 2017; Gao et al., 2018) and pre-trained language models (Leong et al., 2020; Dankers et al., 2020; Su et al., 2020; Li et al., 2021; Ge et al., 2022; Aghazadeh et al., 2022; Li et al., 2023). (Lee et al., 2017, 2018), which is a span-based relation extraction model.…”
Section: Related Work
Confidence: 99%
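The citation statement distinguishes two task framings: word-pair classification versus word sequence labeling. The difference is easiest to see in the shape of the training examples; the sentence, labels, and tag names below are made-up illustrations, not drawn from any of the cited datasets:

```python
# Two common framings of metaphor identification, sketched as data.
sentence = ["He", "attacked", "every", "weak", "point"]

# 1) Sequence labeling: one metaphoricity tag per token
#    ("M" = metaphorical, "O" = literal; tag names are illustrative).
seq_labels = ["O", "M", "O", "M", "O"]
assert len(seq_labels) == len(sentence)

# 2) Word-pair classification: one label for a (verb, argument) pair.
pair = ("attacked", "point")
pair_label = "metaphorical"
```

Sequence labeling scores every token in context, while the pair framing isolates a single grammatical relation, which is why models built for one framing often need restructured data to handle the other.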