2008
DOI: 10.1137/07069239x
Nonnegative Matrix Factorization Based on Alternating Nonnegativity Constrained Least Squares and Active Set Method

Abstract: The nonnegative matrix factorization (NMF) determines a lower rank approximation of a matrix A ∈ R^{m×n}, A ≈ WH, where an integer k ≪ min(m, n) is given and nonnegativity is imposed on all components of the factors W ∈ R^{m×k} and H ∈ R^{k×n}. The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a …
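As context for the framework the abstract describes, the alternating scheme can be sketched in a few lines: each subproblem is a nonnegativity-constrained least squares (NNLS) problem. The sketch below is illustrative only — it solves each subproblem column by column with SciPy's `nnls` (the classical Lawson-Hanson active-set solver), whereas the paper's contribution is a much faster active-set variant that handles multiple right-hand sides together.

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, k, n_iter=50, seed=0):
    """Rank-k NMF of a nonnegative matrix, A ~ W @ H, via alternating NNLS.

    Illustrative sketch: each block subproblem is solved one column (or row)
    at a time with SciPy's Lawson-Hanson active-set solver.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Fix W, solve min_{H >= 0} ||A - W H||_F column by column.
        for j in range(n):
            H[:, j], _ = nnls(W, A[:, j])
        # Fix H, solve min_{W >= 0} ||A^T - H^T W^T||_F row by row.
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H

A = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = anls_nmf(A, k=4)
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)  # relative residual
```

Because each half-step is solved exactly, the objective is nonincreasing across iterations, which is the property the convergence analysis builds on.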

Cited by 509 publications (353 citation statements)
References 22 publications
“…Similar to the POD approach, W and A⁺ are denoted as the basis matrix and coefficient matrix, respectively. The alternating non-negative least squares algorithm proposed in [15], which ensures the convergence of the minimization problem, is implemented in this paper. The prediction for non-negative outputs S_a is computed, similar to the POD-RBF method, as S_a ≈ W · B⁺ · F_a.…”
Section: POD-RBF and NNMF-RBF Network for Fuzzy Data
confidence: 99%
“…The maximum number of iterations is crucial in MU and AU algorithms, since these algorithms are known to be very slow [2,7,8,9,10,16,18,20,21,22,23]. As shown by Lin [23], LS is very fast at minimizing the objective for the first few iterations, but then tends to become slower.…”
Section: Maximum Number of Iterations
confidence: 99%
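The slow convergence of multiplicative updates (MU) noted in this excerpt can be seen in a minimal Lee-Seung-style sketch (plain NumPy, illustrative only — not the cited implementations). Each update rescales the factors elementwise, which preserves nonnegativity but typically shrinks the objective slowly, hence the practical importance of the iteration cap:

```python
import numpy as np

def mu_nmf(A, k, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||A - W H||_F with W, H >= 0.

    The elementwise ratio updates keep both factors nonnegative and make the
    Frobenius objective nonincreasing, but progress per iteration is slow.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    errs = []
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
        errs.append(np.linalg.norm(A - W @ H))
    return W, H, errs

A = np.abs(np.random.default_rng(2).random((30, 20)))
W, H, errs = mu_nmf(A, k=5)
```

Plotting `errs` against iteration count shows the characteristic pattern: rapid decrease for the first few iterations, then a long slow tail.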
“…As shown in refs. [8,9,18], sparse NMF usually gives good results if α and/or β are rather small positive numbers.…”
Section: Determining α and β
confidence: 99%
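To illustrate the role of small α and β: in one common sparse-NMF formulation, L1 penalties on the factors enter the multiplicative updates as additive terms in the denominators, nudging small entries toward zero. The sketch below is a generic L1-penalized update under that assumption, not the exact formulation of refs. [8,9,18]:

```python
import numpy as np

def sparse_mu_nmf(A, k, alpha=0.01, beta=0.01, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for ||A - W H||_F with small L1 penalties:
    alpha on W and beta on H. Illustrative sketch of a generic sparse NMF;
    the cited papers' exact objectives may differ."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # The penalty appears as a constant added to each denominator,
        # shrinking the factors slightly at every step.
        H *= (W.T @ A) / (W.T @ W @ H + beta + eps)
        W *= (A @ H.T) / (W @ H @ H.T + alpha + eps)
    return W, H

A = np.abs(np.random.default_rng(3).random((25, 18)))
W, H = sparse_mu_nmf(A, k=4)
rel = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

With α and β this small, the reconstruction quality stays close to unpenalized NMF, which matches the excerpt's observation that small positive values usually suffice.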
“…A classical method for solving the NNLS problem is the active set method of Lawson and Hanson [20]; however, applying Lawson and Hanson's method directly to NNCP is extremely slow. Bro and De Jong [4] suggested an improved active-set method to solve the NNLS problems, and Van Benthem and Keenan [28] further accelerated the active-set method, which was later utilized in NMF [14] and NNCP [15]. In Friedlander and Hatz [10], the NNCP subproblems are solved by a two-metric projected gradient descent method.…”
Section: Related Work
confidence: 99%
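For reference, the Lawson-Hanson active-set method this excerpt starts from is available as `scipy.optimize.nnls`. A minimal single right-hand-side example (the accelerated methods cited above exist precisely because calling such a solver once per column is slow at scale):

```python
import numpy as np
from scipy.optimize import nnls

# Solve min_{x >= 0} ||C x - d||_2 with the Lawson-Hanson active-set method.
C = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
d = np.array([1.0, -1.0, 2.0])

x, resid = nnls(C, d)
# The unconstrained least-squares solution has a negative first component,
# so the active-set method clamps x[0] to exactly zero and re-solves for x[1].
```

Note that the constrained component is exactly zero (not merely small): the active-set method partitions variables into zero and free sets rather than thresholding.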