2020
DOI: 10.48550/arxiv.2010.05670
Preprint
Modelling Lexical Ambiguity with Density Matrices

Abstract: Words can have multiple senses. Compositional distributional models of meaning have been argued to deal well with finer shades of meaning variation known as polysemy, but are not so well equipped to handle word senses that are etymologically unrelated, or homonymy. Moving from vectors to density matrices allows us to encode a probability distribution over different senses of a word, and can also be accommodated within a compositional distributional model of meaning. In this paper we present three new neural mo…

Cited by 1 publication (3 citation statements) | References 9 publications
“…Similar questions had been asked earlier in quantum foundations when considering causal inference within the quantum context, mostly by Leifer and Spekkens [74,75,43,3]. The recent paper [83] presents three neural models for learning density matrices from a corpus.…”
Section: Black Hat (mentioning)
confidence: 75%
“…Soon after the development of the quantum model of language it was put to the test, on classical hardware of course, and the model greatly outperformed all other available models for a number of standard NLP tasks [57,65]. Despite the growing dominance of machine learning, successes have continued until recently [114,83,115]. With the 1st DisCoCat paper appearing in 2008 in a somewhat obscure venue [25], independently Baroni and Zamparelli also proposed the adjective-noun model of Section 2.1, again strongly supported by experimental evidence [10].…”
Section: Classical Successes But Costly (mentioning)
confidence: 99%