Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1177

Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning

Abstract: Deep compositional models of meaning, which act on distributional representations of words in order to produce vectors for larger text constituents, are evolving into a popular area of NLP research. We detail a compositional distributional framework based on a rich form of word embeddings that aims at facilitating the interactions between words in the context of a sentence. Embeddings and composition layers are jointly learned against a generic objective that enhances the vectors with syntactic information from the su…
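To make the abstract's setup concrete, the following is a minimal sketch, in plain NumPy, of its two ingredients: a multi-sense embedding table (several candidate vectors per word, with the active sense chosen by similarity to the local context) and a composition layer that builds a sentence vector from the disambiguated word vectors. All names and hyperparameters (sense_vecs, SENSES, the additive compose) are illustrative assumptions; the paper's actual model learns the embeddings and composition layers jointly against a syntax-enhanced objective, which this toy does not do.

import numpy as np

rng = np.random.default_rng(0)
VOCAB, SENSES, DIM = 1000, 3, 50

# Each word owns SENSES candidate vectors rather than a single prototype.
sense_vecs = rng.normal(scale=0.1, size=(VOCAB, SENSES, DIM))

def context_vector(word_ids, position, window=2):
    """Average the vectors of the neighbouring words (senses collapsed)."""
    lo, hi = max(0, position - window), min(len(word_ids), position + window + 1)
    neighbours = [w for i, w in enumerate(word_ids[lo:hi], start=lo) if i != position]
    return sense_vecs[neighbours].mean(axis=(0, 1))

def select_sense(word_id, ctx):
    """Hard-select the sense vector most similar to the context."""
    sims = sense_vecs[word_id] @ ctx
    return sense_vecs[word_id, int(np.argmax(sims))]

def compose(sentence_ids):
    """A trivial additive composition layer over sense-disambiguated vectors."""
    return sum(select_sense(w, context_vector(sentence_ids, i))
               for i, w in enumerate(sentence_ids))

print(compose([1, 42, 7]).shape)  # (50,)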


Cited by 51 publications (35 citation statements); references 25 publications.
“…Additionally, this table shows the state-of-the-art results for the two datasets used. As can be seen, the results of our combined approach are close to the reference results; nonetheless, ours is a much simpler approach (for example, [7] reports a recursive neural network using syntax-aware and multi-sense word embeddings). Table 5: F1 results on several paraphrase categories using different similarity and distance measures.…”
Section: Complementarity of the Proposed Measures (supporting)
Confidence: 78%
“…Furthermore, the recent distributed word embedding techniques, such as GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013), have been shown to encode only limited syntactic knowledge of the given corpus (Andreas and Klein, 2014). This shortcoming has also prompted recent research on syntax-aware word embeddings, which enhance the distributed vectors with information about a word's position within its surrounding context (Cheng and Kartsaklis, 2015), though these again encode limited syntactic information.…”
Section: Syntax Encoding (mentioning)
Confidence: 99%
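A small sketch may clarify what "position information of the word within its surrounding context" can mean in practice. This is a hedged illustration, not the model of Cheng and Kartsaklis (2015): the names offsets, pos_transform, and positional_context are invented here, and the idea shown is simply that each relative offset in the window gets its own linear transform, so reordered sentences yield different context representations.

import numpy as np

rng = np.random.default_rng(1)
DIM, WINDOW = 50, 2

# One transform per relative position in the window (offset 0 is the word itself).
offsets = [o for o in range(-WINDOW, WINDOW + 1) if o != 0]
pos_transform = {o: rng.normal(scale=0.1, size=(DIM, DIM)) for o in offsets}

def positional_context(vectors, position):
    """Sum of position-transformed neighbour vectors around `position`."""
    ctx = np.zeros(DIM)
    for o in offsets:
        j = position + o
        if 0 <= j < len(vectors):
            ctx += pos_transform[o] @ vectors[j]
    return ctx

words = rng.normal(size=(3, DIM))          # a toy sentence of three word vectors
print(positional_context(words, 1).shape)  # (50,)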
“…Multi-sense word embeddings are also a popular way to represent polysemous words (Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Guo et al., 2014; Li and Jurafsky, 2015; Iacobacci et al., 2015; Cheng and Kartsaklis, 2015; Lee and Chen, 2017). However, these methods, which cluster contexts to decide on senses, are sensitive to contextual variation and word usage, and may therefore embed a single sense into several vectors.…”
Section: Related Work (mentioning)
Confidence: 99%
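The failure mode described above, a single sense being split into several vectors, falls directly out of how these multi-prototype methods induce senses. The sketch below is a toy version of context clustering in the spirit of the cited approaches (the class name, threshold, and update rule are illustrative assumptions, not any specific paper's algorithm): each occurrence joins its nearest sense centroid, or spawns a new sense when the context is too dissimilar, so noisy contexts readily over-split a sense.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

class SenseClusters:
    """Context-clustering sense induction for a single word (toy version)."""

    def __init__(self, new_sense_threshold=0.3):
        self.tau = new_sense_threshold
        self.centroids = []  # one centroid per induced sense
        self.counts = []

    def assign(self, context_vec):
        """Return a sense index for this occurrence, creating one if needed."""
        if self.centroids:
            sims = [cosine(c, context_vec) for c in self.centroids]
            k = int(np.argmax(sims))
            if sims[k] >= self.tau:
                # Running-mean update of the matched centroid.
                self.counts[k] += 1
                self.centroids[k] += (context_vec - self.centroids[k]) / self.counts[k]
                return k
        self.centroids.append(context_vec.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

rng = np.random.default_rng(2)
word = SenseClusters()
for _ in range(5):
    word.assign(rng.normal(size=50))
print(len(word.centroids))  # random contexts rarely exceed tau, so senses proliferate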
“…Multi-sense word embeddings are popular choices for representing polysemous words (Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Cheng and Kartsaklis, 2015; Lee and Chen, 2017). These methods learn word senses automatically by clustering the contexts they appear in.…”
Section: Introduction (mentioning)
Confidence: 99%