Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1570

Word2Sense: Sparse Interpretable Word Embeddings

Abstract: We present an unsupervised method to generate Word2Sense word embeddings that are interpretable: each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word. The underlying LDA-based generative model can be extended to refine the representation of a polysemous word in a short context, allowing us to use the embeddings in contextual tasks. On computational NLP tasks, Word2Sense…
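As a reading aid for the abstract above, here is a minimal Python sketch of what a sparse, non-negative, sense-indexed embedding looks like when queried. The sense labels and weights are invented for illustration and are not the paper's actual output.

```python
import numpy as np

# Hypothetical sense inventory; in Word2Sense each dimension corresponds to a
# fine-grained sense learned by the LDA-based generative model.
senses = ["finance", "river", "seating", "sports", "music"]

# Sparse, non-negative embedding for the word "bank" (illustrative values only).
bank = np.array([0.62, 0.31, 0.07, 0.0, 0.0])

# Interpret the embedding: each non-zero coordinate names a relevant sense.
for sense, weight in sorted(zip(senses, bank), key=lambda p: -p[1]):
    if weight > 0:
        print(f"{sense}: {weight:.2f}")
# finance: 0.62
# river: 0.31
# seating: 0.07
```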

Cited by 40 publications (32 citation statements), published 2019–2024. References 33 publications.
“…Subramanian et al [37] utilize a denoising k-sparse autoencoder to generate efficient and interpretable distributed word representations. The work by Panigrahi et al [26] is, to the best of our knowledge, the closest to our work among existing research. The authors propose Word2Sense word embeddings in which each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along a dimension represents the relevance of the sense to the word.…”
Section: Interpretable Word Embeddings
confidence: 52%
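The k-sparse autoencoder mentioned in the statement above keeps only the k largest hidden activations and zeros out the rest. A rough sketch of that top-k sparsification step, under the usual formulation (not the exact architecture of Subramanian et al. [37]):

```python
import numpy as np

def k_sparse(h, k):
    """Zero out all but the k largest activations, as in a k-sparse autoencoder layer."""
    out = np.zeros_like(h)
    top = np.argsort(h)[-k:]   # indices of the k largest activations
    out[top] = h[top]
    return out

h = np.array([0.1, 0.9, 0.05, 0.7, 0.3])
print(k_sparse(h, 2))  # [0.  0.9  0.  0.7  0. ]
```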
“…Research has also been done to produce more interpretable static word embeddings, e.g. (Şenel et al., 2020; Panigrahi et al., 2019). For contextual embeddings, Aloui et al. (2020) produced embeddings with semantic super-senses as dimensions, but these are quite broad.…”
Section: Interpretable Word Embeddings
confidence: 99%
“…Whilst efforts have been made to produce more interpretable embeddings, e.g. (Şenel et al., 2020; Panigrahi et al., 2019), the general approach has been to interpret them in relation to each other. For example, the relative distance between word embeddings can indicate their semantic similarity (Schnabel et al., 2015).…”
Section: Introduction
confidence: 99%
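To make the "relative distance" point in the statement above concrete, the standard comparison is cosine similarity between embedding vectors. The vectors below are invented purely for illustration; real embeddings would come from word2vec, GloVe, or similar.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

cat = np.array([0.8, 0.1, 0.3])
dog = np.array([0.7, 0.2, 0.4])
car = np.array([0.1, 0.9, 0.1])

print(cosine(cat, dog))  # high: semantically close words
print(cosine(cat, car))  # low: semantically distant words
```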
“…It is noteworthy that we constrain B_t and C_t to be non-negative in order to learn sparse interpretable word embeddings (Murphy et al., 2012; Luo et al., 2015), so as to capture the polysemous nature of words (Panigrahi et al., 2019). With non-negativity constraints, words are represented by limited dimensions (Murphy et al., 2012).…”
Section: Objective Function
confidence: 99%
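A common way to impose non-negativity constraints like the ones mentioned above is projected gradient descent, clamping negative entries to zero after every update. The sketch below applies that idea to a toy matrix factorization; it is an assumed, generic formulation, not the cited model's actual objective or its B_t/C_t matrices.

```python
import numpy as np

def project_nonneg(M):
    """Project a matrix onto the non-negative orthant (clamp negatives to zero)."""
    return np.maximum(M, 0.0)

# Toy factorization X ≈ B @ C with non-negative factors, in the spirit of
# sparse interpretable embeddings; illustrative only.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(20, 30)))
B = np.abs(rng.normal(size=(20, 5)))
C = np.abs(rng.normal(size=(5, 30)))
lr = 1e-3

for _ in range(200):
    R = B @ C - X                              # residual of the reconstruction
    B_new = project_nonneg(B - lr * R @ C.T)   # gradient step on B, then project
    C_new = project_nonneg(C - lr * B.T @ R)   # gradient step on C, then project
    B, C = B_new, C_new

print(np.linalg.norm(B @ C - X))  # reconstruction error after training
```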