2010
DOI: 10.1017/s1351324910000148

A non-negative tensor factorization model for selectional preference induction

How to cite this article: TIM VAN DE CRUYS (2010). A non-negative tensor factorization model for selectional preference induction. Natural Language Engineering, 16, pp. 417-437. Link to this article: http://journals.cambridge.org/abstract_S1351324910000148

Abstract: Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms use two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be …
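The paper factorizes three-way (verb, subject, object) co-occurrence counts into non-negative rank-R factor matrices. A minimal sketch of such a non-negative CP/PARAFAC decomposition with multiplicative updates is shown below; this is an illustrative NumPy implementation under the standard Frobenius-loss formulation, not the author's actual code:

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: rows index (u, v) pairs, columns index rank.
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def ntf_cp(X, rank, n_iter=2000, eps=1e-9, seed=0):
    # Non-negative CP decomposition of a 3-way count tensor X (e.g. verbs x
    # subjects x objects) via Lee-Seung-style multiplicative updates, which
    # keep all three factor matrices non-negative throughout.
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        KR = khatri_rao(B, C)
        A *= (X1 @ KR) / (A @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, C)
        B *= (X2 @ KR) / (B @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, B)
        C *= (X3 @ KR) / (C @ (KR.T @ KR) + eps)
    return A, B, C
```

The reconstructed preference score for a triple (i, j, k) is then the sum over latent dimensions of `A[i] * B[j] * C[k]`, which smooths unseen triples through the latent factors.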

Cited by 35 publications
(17 citation statements)
References 35 publications
“…A quantitative evaluation on a pseudo-disambiguation task shows that our models achieve state-of-the-art performance. The results for our two-way neural network are on a par with Erk et al.'s (2010) similarity-based approach, while our three-way neural network slightly outperforms the tensor-based factorization model (Van de Cruys, 2009) for multi-way selectional preference induction.…”
Section: Discussion (supporting)
confidence: 56%
“…Our model computes selectional preference scores for the test set in a matter of seconds, whereas for Erk et al.'s model, we ended up sampling from the test set, as computing preference values for the complete test set proved prohibitively expensive. Table 4 compares the results of our neural network architecture for three-way selectional preference acquisition to the results of the tensor-based factorization method (Van de Cruys, 2009). The results indicate that the neural network approach slightly outperforms the tensor-based factorization method. Again the model difference is statistically significant (paired t-test, p < 0.01).…”
Section: Two-way Model (mentioning)
confidence: 99%
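The pseudo-disambiguation evaluation quoted above can be sketched as follows; the `score` callable stands in for whatever selectional preference model is under test (the triple format and function names are illustrative, not the cited authors' code):

```python
def pseudo_disambiguation_accuracy(triples, confounders, score):
    """Fraction of attested (verb, subject, object) triples that the model
    scores higher than the same triple with the object replaced by a
    randomly drawn confounder noun."""
    hits = sum(score(v, s, o) > score(v, s, c)
               for (v, s, o), c in zip(triples, confounders))
    return hits / len(triples)
```

Comparing two models' per-item outcomes with a paired test (e.g. `scipy.stats.ttest_rel`) then yields significance results of the kind quoted (p < 0.01).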
“…For instance, ongoing experiments indicate that the same parameters apply when Lin's similarity is replaced by cosine. Finally, we would like to compare the proposed heuristics with more sophisticated filtering strategies like singular value decomposition (Landauer and Dumais, 1997) and non-negative matrix factorization (Van de Cruys, 2009). …”
Section: Discussion (mentioning)
confidence: 99%
“…We would like to thank the support of projects CAPES/COFECUB 707/11, PNPD 2484/2009, FAPERGS-INRIA 1706-2551/13-7, CNPq 312184/2012-3, 551964/2011-1, 482520/2012-4 and 312077/2012 …”
Section: Acknowledgments (mentioning)
confidence: 99%
“…Non-negative tensor factorization models have also been applied to other language processing applications, including subject-verb-object selectional preference induction [9] and learning semantic word similarity [10]. Without drawing the connection to low rank tensors, Lowd and Domingos [11] propose Naive Bayes models for estimating arbitrary probability distributions that can be seen as a generalization of (6).…”
Section: A Model (mentioning)
confidence: 99%