Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1165
LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories

Abstract: While word embeddings are now a de facto standard representation of words in most NLP tasks, recently the attention has been shifting towards vector representations which capture the different meanings, i.e., senses, of words. In this paper we explore the capabilities of a bidirectional LSTM model to learn representations of word senses from semantically annotated corpora. We show that the utilization of an architecture that is aware of word order, like an LSTM, enables us to create better representations. We …
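For concreteness, here is a minimal sketch of the kind of order-aware encoder the abstract describes: a bidirectional LSTM that reads the context of a sense-annotated token and is trained so that its output vector matches a pretrained embedding of the target sense. All names, dimensions, and the cosine objective below are illustrative assumptions loosely in the spirit of LSTMEmbed, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BiLSTMSenseEncoder(nn.Module):
    """Hypothetical BiLSTM sense-embedding learner (illustrative only)."""

    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Project the concatenated forward/backward states onto the
        # space of the pretrained target embeddings.
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)

    def forward(self, context_ids):
        # context_ids: (batch, seq_len) IDs of tokens around the target
        embs = self.embed(context_ids)
        _, (h, _) = self.lstm(embs)          # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # forward + backward states
        return self.proj(h)                  # (batch, emb_dim)

# Toy usage: push the predicted vector toward a pretrained sense vector.
model = BiLSTMSenseEncoder(vocab_size=50000)
context = torch.randint(0, 50000, (4, 10))   # random toy batch
target = torch.randn(4, 200)                 # stand-in pretrained vectors
loss = nn.functional.cosine_embedding_loss(
    model(context), target, torch.ones(4))
loss.backward()
```

After training, the rows of the embedding layer can be read off as word/sense vectors; the point of the sketch is simply that the LSTM sees word order, which bag-of-words objectives do not.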

Cited by 17 publications (23 citation statements). References 47 publications.
“…This allows the model to learn more accurate semantic relations between the spotted text and its visual. That sense ID can be used to extract any word vector from any pre-trained sense embedding [14,28,31,4,13]. It consists of 1800 images with id senses (e.g.…”
Section: COCO-Text With Visual Context
confidence: 99%
“…LSTMEmbed [13]: LSTMEmbed is the most recent model in sense embeddings. It utilizes a BiLSTM architecture to learn the word and sense embeddings from annotated corpora.…”
Section: Relational Word Embeddings [3] (RWE)
confidence: 99%
“…Such ranking can be computed in all sense embeddings that map both terminological and conceptual representations onto a shared semantic space, which is rather common in recent sense embeddings such as, e.g., [57,65,66,70-72].…”
Section: Ranked Similarity
confidence: 99%
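The ranked similarity this statement refers to is straightforward once words and senses share one vector space: rank a term's candidate senses by their similarity to a word vector. Below is a minimal sketch under that assumption; the function name, the toy `bank%1`/`bank%2` sense IDs, and the vectors are all illustrative.

```python
import numpy as np

def ranked_senses(word_vec, sense_vecs):
    """Rank sense IDs by cosine similarity to a word vector.

    Assumes words and senses live in one shared semantic space,
    as in the sense embeddings discussed above (illustrative only).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(sense_vecs,
                  key=lambda s: cos(word_vec, sense_vecs[s]),
                  reverse=True)

# Toy usage: two hypothetical senses of "bank" against a word vector.
vecs = {"bank%1": np.array([0.9, 0.1]), "bank%2": np.array([0.1, 0.9])}
print(ranked_senses(np.array([1.0, 0.0]), vecs))  # "bank%1" ranks first
```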
“…Cognitive strategies, algorithms and systems dealing with semantic similarity, sense embeddings and new similarity metrics should be tested on the SID dataset. Their performances in both tasks can be compared against those provided in [1], obtained by experimenting with six recent and influential sets of embeddings, namely LessLex [1], NASARI [7], DeConf [8], SenseEmbed [9], SW2V [10], and LSTMEmbed [11].…”
Section: Value of the Data
confidence: 99%