2015
DOI: 10.1080/10556788.2015.1009977

Newton-based optimization for Kullback–Leibler nonnegative tensor factorizations

Abstract: Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback-Leibler divergence function. We propose nove…
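
The objective the abstract refers to is the generalized KL (Poisson) divergence between a nonnegative CP model and the count tensor. The following is a minimal NumPy sketch of that objective, not the authors' implementation; the function names (cp_reconstruct, kl_objective) and the toy dimensions are illustrative assumptions.

import numpy as np

def cp_reconstruct(A, B, C):
    # Rank-R CP model: M[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r].
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def kl_objective(X, A, B, C, eps=1e-12):
    # Generalized KL / Poisson loss: sum(M - X * log M), dropping the
    # log(X!) term, which is constant in the factor matrices.
    M = cp_reconstruct(A, B, C)
    return np.sum(M - X * np.log(M + eps))

# Toy usage: random nonnegative rank-3 factors for a 4 x 5 x 6 tensor.
rng = np.random.default_rng(0)
A, B, C = (rng.random((n, 3)) for n in (4, 5, 6))
X = rng.poisson(cp_reconstruct(A, B, C))  # synthetic count data
print(kl_objective(X, A, B, C))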

Cited by 51 publications (62 citation statements)
References 39 publications
“…Other variants of CP exist that restructure the factors or their constraints to accommodate diverse situations, such as INDSCAL [18], CANDELINC [19], PARAFAC2 [48,109], and DEDICOM [47]. Many CP methods have been proposed across a broad area of research, such as alternating least squares (ALS) based methods [47,68,69,74], block coordinate descent (BCD) based methods [88,93], gradient descent based methods [11,113,128,138], quasi-Newton and nonlinear least squares (NLS) based methods [22,45,57,117,138,145,154], alternating optimization (AO) with the alternating direction method of multipliers (ADMM) based methods [13,124], exact line search based methods [112,137], and randomized/sketching methods [9,21,104,115,136,148]. Sparsity in CP arises from two sources: sparse tensors in applications [7, 22-24, 65, 70, 74, 83, 84, 86, 89, 110, 113, 121, 126, 129, 130] and sparsity-constrained factors in some CP models [50,54,106].…”
Section: Tensor Methods
Citation type: mentioning
Confidence: 99%
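
Of the method families listed in the statement above, plain (unconstrained, Frobenius-norm) ALS is the simplest to sketch. Below is a minimal NumPy sketch of one ALS sweep for a rank-R CP model of a 3-way tensor; it is a generic illustration, not the KL-based method of the indexed paper, and the helper names (unfold, khatri_rao, als_sweep) are assumptions.

import numpy as np

def unfold(X, mode):
    # Mode-n matricization of a 3-way array (rows indexed by mode n).
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product: row (i*J + j) holds U[i,:] * V[j,:].
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def als_sweep(X, A, B, C):
    # Each factor update solves an unconstrained linear least-squares
    # problem with the other two factors held fixed.
    A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
    B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
    C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C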
“…On the other hand, postulating a Poisson distribution for the data turns out to be a more realistic assumption, which implies the use of the KL divergence as an objective function. Chi and Kolda [2012] demonstrate the effectiveness of this assumption, and Hansen et al. [2015] develop efficient algorithms.…”
Section: Handling Missing Values
Citation type: mentioning
Confidence: 98%
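
To make explicit the step from the Poisson assumption to the KL objective (a standard identity, not a claim from the cited papers): for counts x_i and model values m_i, the Poisson negative log-likelihood and the generalized KL divergence differ only by terms constant in the model,

\[
-\log L(m; x) = \sum_i \bigl( m_i - x_i \log m_i + \log x_i! \bigr),
\qquad
D_{\mathrm{KL}}(x \,\|\, m) = \sum_i \Bigl( x_i \log \frac{x_i}{m_i} - x_i + m_i \Bigr)
= -\log L(m; x) + \mathrm{const}(x),
\]

so maximizing the Poisson likelihood over the factor matrices is equivalent to minimizing the KL divergence.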
“…However, ALS suffers from some limitations; for example, it may converge to a local minimum, and its memory consumption may explode when the tensor is large. Nonlinear optimization approaches are another option for computing the tensor factorization, such as the nonlinear conjugate gradient method [39], Newton-based optimization [40], the randomized block sampling method [41], and stochastic gradient descent [42]. In this paper, we adopt a stochastic gradient descent algorithm with a Tikhonov-regularized loss function for clustering based on tensor CP decomposition.…”
Section: Tensor Factorization
Citation type: mentioning
Confidence: 99%
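
As a rough illustration of the approach this last statement describes (SGD with Tikhonov regularization for CP), here is a minimal single-entry update step in NumPy. It is a hypothetical sketch, not the cited paper's algorithm; the step size, penalty weight, and function name are assumptions.

import numpy as np

def sgd_step(X, A, B, C, lam=1e-3, lr=1e-2, rng=None):
    # One stochastic step on a single sampled entry X[i,j,k] for the
    # squared-error CP loss with a Tikhonov (L2) penalty on the factors.
    if rng is None:
        rng = np.random.default_rng()
    i, j, k = (int(rng.integers(n)) for n in X.shape)
    err = np.sum(A[i] * B[j] * C[k]) - X[i, j, k]  # model minus data
    # The gradient touches only the three factor rows hit by this entry;
    # compute all three before updating any of them.
    gA = err * (B[j] * C[k]) + lam * A[i]
    gB = err * (A[i] * C[k]) + lam * B[j]
    gC = err * (A[i] * B[j]) + lam * C[k]
    A[i] -= lr * gA
    B[j] -= lr * gB
    C[k] -= lr * gC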