2006
DOI: 10.1016/j.csda.2004.11.013

A comparison of algorithms for fitting the PARAFAC model

Cited by 352 publications (363 citation statements). References 32 publications.
“…To improve efficiency, various numerical techniques have been tested, such as ridge regression (Rayens & Mitchell, 1997), line search (Bro, 1998; Tomasi, 2006; Rajih et al., 2008), and gradient-based methods (Franc, 1992; Paatero, 1999; Tomasi, 2006). Although the performance of an algorithm depends on the data and the fit model, the ALS algorithm in Appendix A seems to provide the most accurate Parafac solution, at the cost of slow convergence (see Faber et al., 2003; Tomasi & Bro, 2006). Andersson and Bro (2000) provide an excellent open-source MATLAB toolbox with techniques for fitting various multimode component models, including the ALS algorithm for fitting the Parafac model.…”
Section: Fitting the PCA and PARAFAC Models
confidence: 99%
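The excerpt above singles out alternating least squares (ALS) as the standard way to fit the Parafac/CP model. As a concrete illustration, here is a minimal NumPy sketch of CP-ALS with random initialization and plain least-squares updates; the function names, unfolding conventions, and convergence settings are illustrative choices, not the Andersson–Bro toolbox implementation:

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R);
    # row i*J + j holds A[i, r] * B[j, r] in column r.
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(X, R, n_iter=2000, tol=1e-10, seed=0):
    # ALS for the R-component Parafac/CP model
    # x_ijk ~= sum_r A[i,r] * B[j,r] * C[k,r].
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding, columns j*K + k
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding, columns i*K + k
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding, columns i*J + j
    prev = np.inf
    for _ in range(n_iter):
        # Each update is a linear least-squares solve against the
        # Khatri-Rao product of the other two loading matrices.
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        res = np.linalg.norm(X1 - A @ khatri_rao(B, C).T)
        if abs(prev - res) < tol:
            break
        prev = res
    return A, B, C
```

Each conditional update exploits the Khatri-Rao structure, so only an R × R Gram product (an elementwise product of two small Gram matrices) needs to be inverted; `pinv` guards against ill-conditioned components. The slow convergence the excerpt mentions shows up here as many cheap sweeps rather than a few expensive ones.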
“…Further, noise $E$ is added to $X_{\text{pure}}$, in the same way as is suggested in Tomasi and Bro [29], to obtain the data cube

$$X = X_{\text{pure}} + E, \qquad E^{I \times JK} = \frac{\text{Noise\%}}{100 - \text{Noise\%}} \, \big\| X_{\text{pure}}^{I \times JK} \big\|_F \, \tilde{E}^{I \times JK} \qquad (8)$$

Here, $\tilde{E}^{I \times JK}$ is generated from a normal distribution $\sim N(0, S_{\tilde{E}})$ and normalized to a Frobenius norm of 1. We have put $S_{\tilde{E}}$ equal to $10 \cdot I$, with $I$ the identity matrix.…”
Section: Appendix
confidence: 99%
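The noise scheme in Eq. (8) above scales a unit-Frobenius-norm Gaussian array by Noise%/(100 − Noise%) times the norm of the pure data. A small NumPy sketch of that recipe, assuming homoscedastic i.i.d. Gaussian noise (the function name is an illustrative choice, not from the cited papers):

```python
import numpy as np

def add_noise(X_pure, noise_pct, seed=0):
    # Eq. (8)-style noise: E = noise% / (100 - noise%) * ||X_pure||_F * E_tilde,
    # with E_tilde drawn i.i.d. Gaussian and rescaled to Frobenius norm 1.
    rng = np.random.default_rng(seed)
    E_tilde = rng.standard_normal(X_pure.shape)
    E_tilde /= np.linalg.norm(E_tilde)           # unit Frobenius norm
    scale = noise_pct / (100.0 - noise_pct)
    E = scale * np.linalg.norm(X_pure) * E_tilde
    return X_pure + E
```

By construction the relative noise level ||E||_F / ||X_pure||_F equals exactly noise% / (100 − noise%), e.g. 1/9 for a 10% noise setting, which makes simulation results comparable across data cubes of different magnitude.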
“…It was introduced in [3,4]. The CPD and related decompositions have numerous applications [5,6,7,8,9,10,11,12], and various iterative CPD algorithms are available [13]. Unfortunately, for R ≥ 2 the problem may not have an optimal solution because the set S R (I, J, K) is not closed [14].…”
Section: Introduction
confidence: 99%