Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0915

Conditional Random Fields for Metaphor Detection

Abstract: We present an algorithm for detecting metaphors in sentences, which was used in the Shared Task on Metaphor Detection at the First Workshop on Figurative Language Processing. The algorithm is based on a combination of linguistic features and Conditional Random Fields.
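The abstract does not spell out the feature templates, but the setup is standard token-level sequence labeling: each word in a sentence receives a metaphor/literal label, and a CRF scores label sequences over per-token feature dictionaries. A minimal sketch of such a feature extractor (the feature names below are illustrative assumptions, not the authors' exact set):

```python
# Illustrative sketch only: the paper's exact feature templates are not
# reproduced on this page, so the features below are assumptions.
def token_features(sentence, i):
    """Build a per-token feature dict of the kind a CRF tagger consumes."""
    word = sentence[i]
    features = {
        "bias": 1.0,
        "word.lower": word.lower(),      # unigram identity
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
    }
    if i > 0:
        features["prev.word.lower"] = sentence[i - 1].lower()
    else:
        features["BOS"] = True           # beginning of sentence
    if i < len(sentence) - 1:
        features["next.word.lower"] = sentence[i + 1].lower()
    else:
        features["EOS"] = True           # end of sentence
    return features
```

Richer signals of the kind cited below (lemmas, POS tags, concreteness scores, WordNet or VerbNet classes) slot in as additional keys in the same dictionary.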

Cited by 13 publications (10 citation statements)
References 6 publications
“…Many neural models with various features and architectures were introduced in the 2018 VUA Metaphor Detection Shared Task. They include LSTM-based models and CRFs augmented by linguistic features, such as WordNet, POS tags, concreteness scores, unigrams, lemmas, verb clusters, and sentence-length manipulation (Swarnkar and Singh, 2018; Pramanick et al., 2018; Mosolova et al., 2018; Bizzoni and Ghanimifard, 2018; Wu et al., 2018). Researchers also studied different word embeddings, such as embeddings trained from corpora representing different levels of language mastery (Stemle and Onysko, 2018) and binarized vectors that reflect the General Inquirer dictionary category of a word (Mykowiecka et al., 2018).…”
Section: Related Work
confidence: 99%
“…nsu_ai (Mosolova et al., 2018) used linguistic features based on unigrams, lemmas, POS tags, topical LDAs, concreteness, WordNet, VerbNet and verb clusters, and trained a Conditional Random Field (CRF) model with gradient descent using the L-BFGS method to generate predictions.…”
Section: System Descriptions
confidence: 99%
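That description maps directly onto sklearn-crfsuite, where algorithm="lbfgs" selects gradient-based training with L-BFGS. A minimal sketch; the toy sentence, label scheme, and regularization values are assumptions, not the authors' actual configuration:

```python
# Minimal sketch of the CRF setup described above, using sklearn-crfsuite
# (pip install sklearn-crfsuite). The training data is a toy placeholder;
# the shared task labeled each token of the VUA corpus as metaphorical or not.
import sklearn_crfsuite

# X: one feature-dict sequence per sentence; y: one label sequence per sentence.
X_train = [[{"word.lower": "he", "bias": 1.0},
            {"word.lower": "devoured", "bias": 1.0},
            {"word.lower": "the", "bias": 1.0},
            {"word.lower": "book", "bias": 1.0}]]
y_train = [["0", "1", "0", "0"]]  # "1" marks a metaphorically used token

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",   # L-BFGS optimization, as in the quoted description
    c1=0.1,              # L1 regularization strength (assumed value)
    c2=0.1,              # L2 regularization strength (assumed value)
    max_iterations=100,
)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # predicted label sequences for the training sentences
```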
“…Furthermore, the embeddings were frozen for the entire training period, also to ensure a fair comparison. Similarly, for our CRF model from sklearn-crfsuite, we trained the CRF model with monolingual (Setswana and IsiXhosa) FastText embeddings, and the best hyper-parameters were used to train a new CRF using cross-lingual embeddings. As a baseline for the CRF, we also investigated Feature Engineering (FE) as input to the CRF.…”
Section: NER Model
confidence: 99%
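The quoted setup feeds word embeddings to the CRF as features; since crfsuite accepts real-valued features, each embedding dimension can be exposed as its own key. A hedged sketch, where `vectors` is a hypothetical dict-like map from word to vector (e.g. built from a FastText model); the loading step is outside this sketch:

```python
# Hedged sketch: expose each embedding dimension as one real-valued CRF
# feature. `vectors` is a hypothetical word -> numpy array mapping; how it
# is loaded (gensim, fasttext, etc.) is not shown here.
import numpy as np

def embedding_features(word, vectors, dim=300):
    vec = vectors.get(word, np.zeros(dim))  # zero vector for out-of-vocabulary words
    return {f"emb_{j}": float(v) for j, v in enumerate(vec)}

# These keys merge into the same per-token feature dict the CRF already uses:
# features.update(embedding_features(word, vectors))
```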