Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0918
Using Language Learner Data for Metaphor Detection

Abstract: This article describes the system that participated in the shared task (ST) on metaphor detection (Leong et al., 2018) on the Vrije Universiteit Amsterdam Metaphor Corpus (VUA). The ST was part of the workshop on processing figurative language at the 16th annual conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2018). The system combines a small assortment of trending techniques, which implement matured methods from NLP and ML; in particular, the system uses word embe…

Cited by 17 publications (15 citation statements)
References 16 publications
“…At the recent VU Amsterdam (VUA) metaphor identification shared task (Leong et al., 2018), neural approaches dominated, with most teams using LSTMs trained on word embeddings and additional linguistic features, such as semantic classes and part-of-speech tags (Wu et al., 2018; Stemle and Onysko, 2018; Mykowiecka et al., 2018; Swarnkar and Singh, 2018). Most recently, Gao et al. (2018) revisited this task, reporting state-of-the-art results with BiLSTMs and contextualized word embeddings (Peters et al., 2018).…”
Section: Related Work
confidence: 99%
“…bot.zen (Stemle and Onysko, 2018) used word embeddings from different standard corpora representing different levels of language mastery, encoding each word in a sentence into multiple vector-based embeddings, which were then fed into an LSTM RNN architecture. Specifically, the backpropagation step used loss weightings computed as the logarithm of the inverse of the counts of metaphors and non-metaphors.…”
Section: System Descriptions
confidence: 99%
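The log-inverse class weighting described in the statement above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the function name, label encoding, and the exact form log(total / count) are hypothetical readings of "the logarithmic function of the inverse of the count".

```python
import math
from collections import Counter

def log_inverse_class_weights(labels):
    """Class weights as the log of the inverse relative class frequency.

    Hypothetical sketch of the weighting scheme attributed to bot.zen:
    the rarer class (metaphors) receives a larger weight so that
    backpropagation is not dominated by the majority (literal) class.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    # log(total / count) grows as the class gets rarer
    return {label: math.log(total / count) for label, count in counts.items()}

# Toy token labels: 0 = literal, 1 = metaphor (90/10 imbalance)
labels = [0] * 90 + [1] * 10
weights = log_inverse_class_weights(labels)
```

In a training loop, such weights would typically be passed to a weighted loss (e.g. a per-class weight vector for cross-entropy) so that errors on metaphor tokens contribute more to the gradient.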
“…As part of the NAACL 2018 Metaphor Shared Task (Leong et al., 2018), many researchers proposed neural models that mainly employ LSTMs (Hochreiter and Schmidhuber, 1997) with pre-trained word embeddings to identify metaphors at the word level. The best-performing systems were THU NGN (Wu et al., 2018), OCOTA (Bizzoni and Ghanimifard, 2018), and bot.zen (Stemle and Onysko, 2018). Gao et al. (2018) were the first to employ the deep contextualised word representation ELMo (Peters et al., 2018), combined with pre-trained GloVe (Pennington et al., 2014) embeddings, to train bidirectional LSTM-based models.…”
Section: Related Work
confidence: 99%