Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)
DOI: 10.3115/v1/d14-1161

Word Semantic Representations using Bayesian Probabilistic Tensor Factorization

Abstract: Many forms of word relatedness have been developed, providing different perspectives on word similarity. We introduce a Bayesian probabilistic tensor factorization model for synthesizing a single word vector representation and per-perspective linear transformations from any number of word similarity matrices. The resulting word vectors, when combined with the per-perspective linear transformation, approximately recreate, while also regularizing and generalizing, each word similarity perspective. Our method can c…
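
To make the abstract's one-sentence model description concrete: each similarity perspective S_k is approximated from a single shared embedding matrix V and a per-perspective linear map M_k, roughly S_k ≈ V M_k Vᵀ. The sketch below is a toy MAP-style illustration only, with plain gradient descent and L2 penalties standing in for the paper's Bayesian priors and inference (the authors sample posteriors rather than optimizing a point estimate); all names and hyperparameters are invented for illustration.

```python
import numpy as np

def factorize(sims, dim=50, iters=500, lr=0.01, reg=0.1, seed=0):
    """Fit shared word vectors V and one linear map M_k per similarity
    perspective so that V @ M_k @ V.T approximates each matrix sims[k].

    A minimal MAP-style sketch: L2 regularization stands in for the
    paper's Gaussian priors, gradient descent for its Bayesian inference.
    """
    rng = np.random.default_rng(seed)
    n = sims[0].shape[0]
    V = rng.normal(scale=0.1, size=(n, dim))
    Ms = [np.eye(dim) for _ in sims]
    for _ in range(iters):
        gV = reg * V
        for k, S in enumerate(sims):
            R = V @ Ms[k] @ V.T - S            # residual for perspective k
            gV += R @ V @ Ms[k].T + R.T @ V @ Ms[k]
            gM = V.T @ R @ V + reg * Ms[k]     # gradient w.r.t. M_k
            Ms[k] -= lr * gM
        V -= lr * gV
    return V, Ms

# Toy usage: two 4-word "perspectives" (symmetric similarity matrices).
A = np.array([[1., .8, .1, .0], [.8, 1., .2, .1],
              [.1, .2, 1., .7], [.0, .1, .7, 1.]])
B = np.array([[1., .3, .6, .1], [.3, 1., .1, .5],
              [.6, .1, 1., .2], [.1, .5, .2, 1.]])
V, Ms = factorize([A, B], dim=2)
print(np.round(V @ Ms[0] @ V.T, 2))  # should roughly recreate A
```
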

Cited by 33 publications (33 citation statements); references 11 publications.

“…Linguistic Constraints and Dictionaries. We use the same set of monolingual constraints as LEAR: synonymy and antonymy constraints from (Zhang et al., 2014; Ono et al., 2015) are extracted from WordNet and Roget's Thesaurus (Kipfer, 2009). As in other work on LE specialisation (Nguyen et al., 2017; Nickel and Kiela, 2017), asymmetric LE constraints are extracted from WordNet, and we collect both direct and indirect LE pairs (i.e., (beagle, dog), (dog, animal)). [Footnote 7: The proposed CLEAR method is by design agnostic of input distributional vectors and its main purpose is to support fine-tuning of a wide spectrum of input vectors.]…”
Section: Methods (citation type: mentioning)
confidence: 99%
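
The constraint extraction this excerpt describes, synonym and antonym pairs plus direct and indirect LE (hypernymy) pairs from WordNet, can be sketched with NLTK's WordNet interface. This is a hedged approximation: the cited papers also draw on Roget's Thesaurus and apply their own filtering, so the resulting pair counts will differ. Requires `nltk.download('wordnet')` beforehand.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def single_words(synset):
    """Lemma names that are single words (skip multiword expressions)."""
    return [n for n in synset.lemma_names() if "_" not in n]

synonyms, antonyms, le_pairs = set(), set(), set()

for synset in wn.all_synsets():
    names = single_words(synset)
    # Synonymy: unordered pairs of lemmas sharing a synset.
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            synonyms.add((a, b))
    # Antonymy: marked explicitly on individual lemmas in WordNet.
    for lemma in synset.lemmas():
        for ant in lemma.antonyms():
            antonyms.add((lemma.name(), ant.name()))
    # LE (hyponym, hypernym): the transitive closure over hypernyms yields
    # both direct pairs like (beagle, dog) and indirect ones like (beagle, animal).
    for hyper in synset.closure(lambda s: s.hypernyms()):
        for lo in names:
            for hi in single_words(hyper):
                le_pairs.add((lo, hi))

print(len(synonyms), len(antonyms), len(le_pairs))
```
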
“…Additional experiments with other word vectors, e.g., with CONTEXT2VEC (Melamud et al., 2016a) (which uses bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) for context modeling), and with dependency-based word embeddings (Bansal et al., 2014; Melamud et al., 2016b), lead to similar results and the same conclusions. [Footnote 5: We have experimented with another set of constraints used in prior work (Zhang et al., 2014; Ono et al., 2015), reaching similar conclusions: these were extracted from WordNet (Fellbaum, 1998) and Roget (Kipfer, 2009), and comprise 1,023,082 synonymy pairs and 380,873 antonymy pairs.] […] omitted only before the final output layer to enable full-range predictions (see Fig.…”
Section: Network Design and Parameters (citation type: mentioning)
confidence: 81%
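
The excerpt's truncated last sentence appears to describe a regularizer, plausibly dropout, applied after every hidden layer but omitted before the final output layer so that predictions can span the full output range. A minimal PyTorch sketch of that placement follows; the assumption that dropout is the omitted subject, and every layer size and activation here, are mine and not the cited paper's configuration.

```python
import torch.nn as nn

def specialisation_net(dim=300, hidden=512, p_drop=0.5):
    """Feed-forward net with dropout after each hidden layer only.

    Assumed illustrative architecture: dropout is deliberately absent
    before the final linear layer, so outputs are not zeroed/rescaled
    and can take their full range at prediction time.
    """
    return nn.Sequential(
        nn.Linear(dim, hidden), nn.Tanh(), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.Tanh(), nn.Dropout(p_drop),
        nn.Linear(hidden, dim),  # no dropout before the output layer
    )

net = specialisation_net()
print(net)
```
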
“…We work with Wikipedia-trained FASTTEXT embeddings (Bojanowski et al., 2017). We take English constraints from previous work: synonyms and antonyms were created from WordNet and Roget's Thesaurus (Zhang et al., 2014; Ono et al., 2015); LE constraints were collected from WordNet and contain both direct and transitively obtained LE pairs. We retain the constraints for which both words exist in the trimmed (200K) FASTTEXT vocabulary, resulting in a total of 1,493,686 LE, 521,037 synonym, and 141,311 antonym pairs.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
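
The retention step in this excerpt (keep only constraint pairs whose words both occur in the trimmed 200K embedding vocabulary) is straightforward to outline. The sketch below assumes fastText's plain-text `.vec` format and hypothetical file names; none of this is taken from the cited paper.

```python
def read_pairs(path):
    """One constraint pair per line: 'word1 word2' (assumed format)."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.split()[:2]) for line in f if line.strip()]

def load_vocab(path, limit=200_000):
    """First `limit` words of a fastText .vec file (skips its header)."""
    with open(path, encoding="utf-8") as f:
        next(f)  # header line: "<vocab_size> <dim>"
        return {line.split(" ", 1)[0] for _, line in zip(range(limit), f)}

def filter_pairs(pairs, vocab):
    """Retain pairs where both words are in the trimmed vocabulary."""
    return [(a, b) for a, b in pairs if a in vocab and b in vocab]

# Hypothetical file names, for illustration only.
vocab = load_vocab("wiki.en.vec")
synonyms = filter_pairs(read_pairs("synonyms.txt"), vocab)
print(len(synonyms), "synonym pairs retained")
```
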