Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0914
Di-LSTM Contrast: A Deep Neural Network for Metaphor Detection

Abstract: The contrast between the contextual and general meaning of a word serves as an important clue for detecting its metaphoricity. In this paper, we present a deep neural architecture for metaphor detection which exploits this contrast. Additionally, we use cost-sensitive learning by re-weighting examples, and baseline features such as concreteness ratings, POS and WordNet-based features. Our best-performing system achieves an overall F1 score of 0.570 on the All-POS category and 0.605 on the Verbs category.
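The abstract does not spell out how the contextual/general contrast is computed; a minimal sketch of one plausible reading, where the contrast is the elementwise difference between a context-sensitive representation and the word's static embedding (the exact composition here is an illustrative assumption, not the paper's published formula):

```python
import numpy as np

def contrast_features(h_ctx, e_word):
    """Build a contrast feature vector for one token.

    h_ctx  : contextual representation (e.g. a BiLSTM hidden state)
    e_word : context-independent word embedding ("general meaning")

    Concatenating both vectors with their elementwise difference lets a
    downstream classifier see how far the contextual use drifts from the
    general sense -- the cue the paper exploits for metaphoricity.
    """
    h_ctx, e_word = np.asarray(h_ctx), np.asarray(e_word)
    return np.concatenate([h_ctx, e_word, h_ctx - e_word])

# Toy 2-dimensional example: output is [h ; e ; h - e], length 3 * dim.
feats = contrast_features([0.9, 0.1], [0.2, 0.4])
```

A classifier trained on such features can then learn that a large contextual drift correlates with metaphorical usage.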


Cited by 19 publications (17 citation statements)
References 22 publications
“…Many neural models with various features and architectures were introduced in the 2018 VUA Metaphor Detection Shared Task. They include LSTM-based models and CRFs augmented by linguistic features, such as WordNet, POS tags, concreteness score, unigrams, lemmas, verb clusters, and sentence-length manipulation (Swarnkar and Singh, 2018; Pramanick et al., 2018; Mosolova et al., 2018; Bizzoni and Ghanimifard, 2018; Wu et al., 2018). Researchers also studied different word embeddings, such as embeddings trained from corpora representing different levels of language mastery (Stemle and Onysko, 2018) and binarized vectors that reflect the General Inquirer dictionary category of a word (Mykowiecka et al., 2018).…”
Section: Related Work (mentioning; confidence: 99%)
“…At the recent VU Amsterdam (VUA) metaphor identification shared task (Leong et al., 2018), neural approaches dominated, with most teams using LSTMs trained on word embeddings and additional linguistic features, such as semantic classes and part-of-speech tags (Wu et al., 2018; Stemle and Onysko, 2018; Mykowiecka et al., 2018; Swarnkar and Singh, 2018). Most recently, Gao et al. (2018) revisited this task, reporting state-of-the-art results with BiLSTMs and contextualized word embeddings (Peters et al., 2018).…”
Section: Related Work (mentioning; confidence: 99%)
“…DeepReader (Swarnkar and Singh, 2018): The authors present a neural network architecture that concatenates the hidden states of forward and backward LSTMs, followed by feature selection and classification. They also show that re-weighting examples and adding linguistic features (WordNet, POS, concreteness) further improves performance.…”
Section: System Descriptions (mentioning; confidence: 99%)
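The cost-sensitive re-weighting mentioned above addresses class imbalance: metaphorical tokens are the minority class, so their errors should cost more. A small illustration using inverse-frequency class weights in a weighted loss (the paper's exact weighting scheme is not given here, so this particular formula is an assumption):

```python
import numpy as np

def inverse_freq_weights(labels):
    """Per-class weights inversely proportional to class frequency.

    Up-weights the rare metaphorical class so that misclassifying it
    costs more during training -- a simple stand-in for the
    cost-sensitive re-weighting described in the system description.
    """
    labels = np.asarray(labels)
    counts = np.bincount(labels)                   # examples per class
    return len(labels) / (len(counts) * counts)    # n / (k * n_c)

def weighted_nll(probs, labels, weights):
    """Class-weighted negative log-likelihood over predicted probabilities."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    p_true = probs[np.arange(len(labels)), labels]  # prob of the true class
    return float(np.mean(weights[labels] * -np.log(p_true)))

# 8 literal (0) vs. 2 metaphorical (1) tokens:
# the minority class gets a 4x heavier weight.
y = [0] * 8 + [1] * 2
w = inverse_freq_weights(y)
```

In a neural setup the same per-class weights would simply scale each example's loss term before backpropagation.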