2014
DOI: 10.1007/978-3-662-44845-8_22
Scalable Nonnegative Matrix Factorization with Block-wise Updates

Abstract: Nonnegative Matrix Factorization (NMF) has been applied with great success to many applications. As NMF is applied to massive datasets such as web-scale dyadic data, it is desirable to leverage a cluster of machines to speed up the factorization. However, it is challenging to efficiently implement NMF in a distributed environment. In this paper, we show that by leveraging a new form of update functions, we can perform local aggregation and fully explore parallelism. Moreover, under the new form of update funct…
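For background on the update functions the abstract refers to, here is a minimal centralized sketch of the classic Lee-Seung multiplicative updates for squared loss. This is the baseline that distributed schemes such as the paper's block-wise updates parallelize; it is not the paper's algorithm, and the function name and parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(Y, k, iters=200, eps=1e-9, seed=0):
    """Classic Lee-Seung multiplicative updates for squared loss.

    A centralized baseline sketch, not the distributed block-wise
    algorithm of the cited paper. Multiplicative updates preserve
    nonnegativity as long as Y and the initial factors are nonnegative.
    """
    rng = np.random.default_rng(seed)
    M, N = Y.shape
    A = rng.random((M, k))
    X = rng.random((k, N))
    for _ in range(iters):
        # eps guards against division by zero in the ratios.
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)
    return A, X
```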


Cited by 24 publications (21 citation statements)
References 23 publications
“…The general techniques of incremental updates have shown efficiency in many algorithms, such as Nonnegative Matrix Factorization [27] and Expectation-Maximization [28]. In this section, we present an incremental update mechanism for BP algorithms, referred to as an incremental-update approach.…”
Section: Incremental Updates (mentioning)
confidence: 99%
“…Assuming that the objective function Ψ(Y‖AX) is expressed by an additive function, e.g., the Bregman divergence, then we have: $\Psi(Y \| AX) = \sum_{m=1}^{M} \sum_{n=1}^{N} \Psi(Y_{mn} \| A_m X_n)$. Several optimization strategies aim to perform blockwise updates via the sequential minimization of Ψ(Y‖AX) with respect to the corresponding blocks $A_m$ or $X_n$. In the next subsection, we propose a different approach to blockwise updates based on three-step minimization.…”
Section: Distributed NMF (mentioning)
confidence: 99%
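The separability in the quoted objective is what makes block-wise distribution possible: under squared loss, updating a column block of X touches only the matching columns of Y, so blocks can be processed independently. A minimal sketch (the block partition and function name are illustrative, not taken from the cited paper):

```python
import numpy as np

def blockwise_x_update(Y, A, X, blocks, eps=1e-9):
    """One sweep of block-wise multiplicative updates on X.

    Because the squared-loss objective is additive over columns
    (an instance of the separable Bregman divergence), each column
    block X[:, n0:n1] can be updated using only the matching columns
    Y[:, n0:n1], which is the basis for distributing the work.
    `blocks` is a list of (start, end) column ranges.
    """
    AtA = A.T @ A  # shared k-by-k Gram matrix, computed once
    for n0, n1 in blocks:
        Yb = Y[:, n0:n1]
        Xb = X[:, n0:n1]           # view: updates X in place
        Xb *= (A.T @ Yb) / (AtA @ Xb + eps)
    return X
```

A sweep over any partition of the columns yields the same result as the full multiplicative update, since each column's update is independent of the others.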
“…There are several recent distributed NMF algorithms in the literature [19,6,32,20]. Liu et al. propose running Multiplicative Update (MU) for KL divergence, squared loss, and "exponential" loss functions [20].…”
Section: Survey (mentioning)
confidence: 99%
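For context on the Multiplicative Update (MU) rule mentioned in the quote, here is a minimal single-machine sketch of the standard KL-divergence MU step. The distributed details of Liu et al.'s variant are not reproduced; the function name and eps parameter are illustrative.

```python
import numpy as np

def mu_kl_step(Y, A, X, eps=1e-9):
    """One multiplicative-update step minimizing the KL divergence.

    Standard centralized KL-NMF updates; each step does not increase
    D_KL(Y || AX). This is background for, not a reproduction of,
    the distributed MU algorithms surveyed above.
    """
    AX = A @ X + eps
    # Update X: numerator A^T (Y / AX), denominator column sums of A.
    X = X * (A.T @ (Y / AX)) / (A.sum(axis=0)[:, None] + eps)
    AX = A @ X + eps
    # Update A: numerator (Y / AX) X^T, denominator row sums of X.
    A = A * ((Y / AX) @ X.T) / (X.sum(axis=1)[None, :] + eps)
    return A, X
```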
“…Using similar approaches, Liao et al. implement an open-source Hadoop-based MU algorithm and study its scalability on large-scale biological data sets [19]. Also, Yin, Gao, and Zhang present a scalable NMF that can perform frequent updates, which aim to use the most recently updated data [32]. Similarly, Faloutsos et al. propose a distributed, scalable method for decomposing matrices, tensors, and coupled data sets through stochastic gradient descent on a variety of objective functions [6].…”
Section: Survey (mentioning)
confidence: 99%