Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0913

Neural Metaphor Detecting with CNN-LSTM Model

Abstract: Metaphor is figurative language widely used in daily life and literature, and detecting the metaphors evoked by a text is an important task. The metaphor shared task therefore aims to extract metaphors from plain text at the word level. We propose a CNN-LSTM model for this task. Our model combines CNN and LSTM layers to utilize both local and long-range contextual information for identifying metaphorical information. In addition, we compare the performance of the softmax classifier and conditional random …
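The abstract describes a word-level tagger that stacks a convolutional layer (local context) under an LSTM (long-range context), with either a softmax or a CRF output. Below is a minimal PyTorch sketch of that general architecture, not the authors' released model; the layer sizes, kernel width, and the plain per-token softmax head are illustrative assumptions.

import torch
import torch.nn as nn

class CNNLSTMTagger(nn.Module):
    """Toy CNN-LSTM token tagger: CNN for local context, BiLSTM for long-range context."""
    def __init__(self, vocab_size, emb_dim=300, conv_channels=128,
                 lstm_hidden=128, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Convolution over neighboring embeddings captures local context;
        # kernel_size=3 with padding=1 keeps the sequence length unchanged.
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        # Bidirectional LSTM captures long-range context across the sentence.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True,
                            bidirectional=True)
        # Per-token classifier over {literal, metaphorical}.
        self.out = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, channels, seq_len)
        x, _ = self.lstm(x.transpose(1, 2))            # (batch, seq_len, 2*hidden)
        return self.out(x)                             # per-token label logits

A CRF output layer could replace the per-token softmax to model label dependencies, which is the comparison the abstract refers to.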

Cited by 64 publications (83 citation statements)
References 10 publications
“…In addition, for our model specifically, Conversation genre contexts are much shorter on average (23.8 vs. 97.3). Our best performing model (ELMo LAC) is within 0.4 F1 score of the first-place model in the VUA shared task (Wu et al., 2018). The GloVe LAC model would also have obtained second place at 65.2 F1, yet is considerably simpler than the systems used in the shared task, which employed ensembles of deep neural architectures and hand-engineered, metaphor-specific features.…”
Section: Results
confidence: 81%
“…At the recent VU Amsterdam (VUA) metaphor identification shared task (Leong et al., 2018), neural approaches dominated, with most teams using LSTMs trained on word embeddings and additional linguistic features, such as semantic classes and part-of-speech tags (Wu et al., 2018; Stemle and Onysko, 2018; Mykowiecka et al., 2018; Swarnkar and Singh, 2018). Most recently, Gao et al. (2018) revisited this task, reporting state-of-the-art results with BiLSTMs and contextualized word embeddings (Peters et al., 2018).…”
Section: Related Work
confidence: 99%
“…It assigns the metaphor label if the word is annotated metaphorically more frequently than literally in the training set, and the literal label otherwise. We also compare our model with (2) a neural similarity network with skip-gram word embeddings (Rei et al., 2017), (3) a balanced logistic regression classifier on the target verb lemma that uses a set of features based on multisense abstractness rating (Köper and im Walde, 2017), and (4) a CNN-LSTM ensemble model with a weighted-softmax classifier which incorporates pre-trained word2vec, POS tags, and word cluster features (Wu et al., 2018). We experiment with both the sequence labeling model (SEQ) and the classification model (CLS) for the verb classification task, and the sequence labeling model (SEQ) for the sequence labeling task.…”
Section: Comparison Systems
confidence: 99%
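The first comparison system quoted above is a lexical majority baseline: a word is tagged as metaphorical at test time only if it was annotated metaphorically more often than literally in the training data. A minimal sketch of that idea (the helper name and data format are hypothetical, not taken from the cited papers):

from collections import Counter

def build_lexical_baseline(training_tokens):
    # training_tokens: iterable of (word, is_metaphor) pairs from the training set.
    metaphorical, literal = Counter(), Counter()
    for word, is_metaphor in training_tokens:
        (metaphorical if is_metaphor else literal)[word.lower()] += 1
    # Predict the metaphor label only for words seen metaphorically more often
    # than literally; everything else (including unseen words) is literal.
    return lambda word: metaphorical[word.lower()] > literal[word.lower()]

predict = build_lexical_baseline([("attack", True), ("attack", True), ("attack", False)])
print(predict("attack"))   # True: 2 metaphorical vs. 1 literal occurrence in training
print(predict("table"))    # False: unseen word defaults to literal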
“…The most recent approaches (Wu et al., 2018; Gao et al., 2018) treat this as a sequence tagging task: the predicted labels are conditioned only on the BiLSTM (Graves and Schmidhuber, 2005) hidden states of the target words. This approach is not tailor-made for metaphors; it is the same procedure as that used in other sequence tagging tasks, such as Part-of-Speech (PoS) tagging (Plank et al., 2016) and Named Entity Recognition (NER) (Lample et al., 2016).…”
Section: Introduction
confidence: 99%
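The passage above frames recent metaphor detection as generic sequence tagging: each token's label is predicted independently from that token's BiLSTM hidden state, exactly as in PoS tagging or NER. A minimal sketch of that formulation, with illustrative sizes assumed (contrast it with the CNN-LSTM sketch under the abstract):

import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Generic BiLSTM sequence tagger: one label per token, no CNN, no CRF."""
    def __init__(self, vocab_size, emb_dim=100, hidden=100, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Each label is conditioned only on the token's own hidden state h_t.
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden_states)   # per-token logits, no label-transition modeling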