2012
DOI: 10.1007/978-3-642-33266-1_52
Selecting β-Divergence for Nonnegative Matrix Factorization by Score Matching

Cited by 8 publications (9 citation statements) | References 12 publications
“…In this section we demonstrate the proposed method on various data types and learning tasks. First we provide the results on synthetic data, whose density is known, to compare the behavior of MTL, MEDAL and the score matching method [29]. Second, we illustrate the advantage of the EDA density over ED.…”
Section: Methods
confidence: 99%
“…(21). The log-likelihood and negative score matching objectives [29] on the same four datasets are shown. The estimates are consistent with the ground-truth Gaussian and Poisson data.…”
Section: Methods
confidence: 99%
“…One such metric for STFT spectra is the Itakura-Saito divergence [21], which was applied to nonnegative matrix factorization (NMF) [22,23] with successful results in music sound analysis [22]. The β-divergence also provides a scale-invariant distance and was likewise applied to NMF [24,25]. In DNN learning, however, simple metrics that are easy to differentiate are preferred, so that a learning algorithm can be derived and gradients computed efficiently.…”
Section: Loss Function of Singing Voice Separation Models
confidence: 99%
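The β-divergence family referenced in the citation statements above interpolates between several common NMF cost functions: β = 0 gives the Itakura-Saito divergence, β = 1 the generalized Kullback-Leibler divergence, and β = 2 half the squared Euclidean distance. A minimal sketch of the elementwise divergence, written from the standard definition (the function name and interface are illustrative, not from the cited paper):

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Sum of elementwise beta-divergences d_beta(x | y).

    beta = 0 -> Itakura-Saito, beta = 1 -> generalized KL,
    beta = 2 -> half the squared Euclidean distance.
    Assumes x and y are strictly positive where required.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if beta == 0:
        # Itakura-Saito: scale-invariant, d(c*x | c*y) = d(x | y)
        r = x / y
        return np.sum(r - np.log(r) - 1.0)
    if beta == 1:
        # Generalized Kullback-Leibler
        return np.sum(x * np.log(x / y) - x + y)
    # General case for beta not in {0, 1}
    return np.sum((x**beta + (beta - 1.0) * y**beta
                   - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0)))
```

For β = 2 this reduces to (x − y)²/2 per entry, which is why the choice of β amounts to choosing a noise model (and is what the paper's score-matching criterion selects automatically).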
“…The resulting algorithms have been successfully applied to various learning tasks such as part-based representation learning, clustering, bi-clustering, and graph matching [18][19][20][21][22][23][24][25][26][27][28][29][30].…”
confidence: 99%