2016
DOI: 10.48550/arxiv.1601.03764
Preprint
Linear Algebraic Structure of Word Senses, with Applications to Polysemy

Cited by 14 publications (26 citation statements)
References 0 publications
“…Sparse coding is usually solved for a given K and k by using alternating minimization such as k-SVD (Aharon et al. 2006) to find the $A_j$'s that minimize the $L_2$ reconstruction error $\sum_w \big\| v_w - \sum_{j=1}^{K} \alpha_{(w,j)} A_j \big\|_2^2$. Arora et al. (2016b) show that multiple senses of a word reside as a linear superposition within the word embedding and can be recovered by simple sparse coding. Therefore, one can use the sparse coding of word vectors to detect multiple senses of words.…”
Section: Averaging vs Partition Averaging
confidence: 99%
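The sparse-coding recovery described in the excerpt above can be sketched with scikit-learn's `DictionaryLearning`. This is a minimal illustration only: the embeddings below are random stand-ins rather than real word vectors, and the values of K and k are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy stand-in for word embeddings: 200 "words" in 50 dimensions.
# (A real experiment would use pretrained vectors such as GloVe.)
rng = np.random.default_rng(0)
V = rng.standard_normal((200, 50))

# Learn K dictionary atoms and express each word vector as a sparse
# linear superposition of at most k atoms, minimizing the L2
# reconstruction error by alternating minimization.
K, k = 20, 5
dl = DictionaryLearning(
    n_components=K,
    transform_algorithm="omp",        # sparse coding step
    transform_n_nonzero_coefs=k,      # at most k atoms per word
    max_iter=100,
    random_state=0,
)
alpha = dl.fit_transform(V)   # sparse coefficients, shape (200, K)
A = dl.components_            # dictionary atoms,    shape (K, 50)

# The nonzero atoms selected for a word are its candidate "senses"
# under the linear-superposition view.
recon_err = np.linalg.norm(V - alpha @ A)
```

Under this view, inspecting the nearest words to each atom $A_j$ with a nonzero coefficient for a given word surfaces that word's distinct senses.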
“…We use a script for preprocessing the dataset. We consider several embedding baselines mostly taken from (Mekala et al. 2017; Wu et al. 2018; Arora et al. 2016b). More details on experimental settings and hyper-parameter values are described in the Supplementary material.…”
Section: Text Classification Task
confidence: 99%
“…Given a target word, the task (Arora et al., 2016; Sun et al., 2017) is to identify the true context corresponding to a sense of the target word out of 10 other randomly selected false contexts, where a context is represented by a set of similar words. For example, two of the true contexts for the target word bank are water, land, river, ... and institution, deposits, money, .... We use the R1 dataset from Sun et al. (2017), which consists of 137 word types and 535 queries.…”
Section: Word Context Relevance (WCR)
confidence: 99%
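One simple way to score this task, sketched below, is to rank each candidate context by the average cosine similarity between the target word's vector and the context words' vectors. The vectors and helper names here are hypothetical illustrations, not the cited papers' exact method.

```python
import numpy as np

def wcr_score(target_vec, context_vecs):
    """Average cosine similarity between the target word vector
    and the vectors of the words representing a context."""
    t = target_vec / np.linalg.norm(target_vec)
    C = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    return float((C @ t).mean())

def pick_context(target_vec, candidate_contexts):
    """Return the index of the highest-scoring candidate context."""
    scores = [wcr_score(target_vec, c) for c in candidate_contexts]
    return int(np.argmax(scores))

# Toy illustration with synthetic vectors (not the R1 dataset):
rng = np.random.default_rng(1)
target = rng.standard_normal(50)
# True context: 3 words whose vectors lie close to the target.
true_ctx = target + 0.05 * rng.standard_normal((3, 50))
# 10 false contexts: unrelated random vectors.
false_ctxs = [rng.standard_normal((3, 50)) for _ in range(10)]
candidates = [true_ctx] + false_ctxs
best = pick_context(target, candidates)
```

Because the true context's words are nearly parallel to the target vector while the false contexts are random, the true context (index 0) wins by a wide margin.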