2021
DOI: 10.1109/jstsp.2020.3045911
Inexact Generalized Gauss–Newton for Scaling the Canonical Polyadic Decomposition With Non-Least-Squares Cost Functions

Abstract: The canonical polyadic decomposition (CPD) allows one to extract compact and interpretable representations of tensors. Several optimization-based methods exist to fit the CPD of a tensor for the standard least-squares (LS) cost function. Extensions have been proposed for more general cost functions such as β-divergences as well. For these non-LS cost functions, a generalized Gauss-Newton (GGN) method has been developed. This is a second-order method that uses an approximation of the Hessian of the cost functio…

Cited by 4 publications (3 citation statements). References 57 publications.
“…Even though the GN algorithm is derived for least-squares problems, it can be generalized easily to accommodate other loss functions. Following general results in Schraudolph (2002) and for tensor decompositions (Vandecappelle et al., 2020; Vandecappelle et al., 2021), the generalized GN algorithm can be derived similarly to the strategy in Subsection 3.2. In the dogleg trust-region framework, the necessary GN direction p_r^{(d)} is derived starting from the linear system:…”
Section: Generalized Gauss-Newton Algorithm (mentioning)
confidence: 99%
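The generalized GN direction described in the statement above comes from replacing the least-squares Gauss-Newton Hessian J^T J with J^T D J, where D holds the entrywise second derivatives of the loss. A minimal sketch of that linear system follows; the function and variable names are illustrative and do not come from the cited papers' code, and for least squares the step reduces to the classical GN step:

```python
import numpy as np

def ggn_direction(J, grad_loss, hess_loss_diag, damping=1e-6):
    """Solve (J^T D J + damping*I) p = -J^T grad_loss for the GGN step.

    J              -- Jacobian of the model w.r.t. the parameters
    grad_loss      -- entrywise first derivative of the loss at the model output
    hess_loss_diag -- entrywise second derivative of the loss (diagonal of D)
    """
    n = J.shape[1]
    # GGN Hessian approximation: J^T D J, optionally damped (Levenberg-style).
    H = J.T @ (hess_loss_diag[:, None] * J) + damping * np.eye(n)
    g = J.T @ grad_loss
    return np.linalg.solve(H, -g)

# Toy check: for the LS loss, grad = residual and D = I, so the GGN step
# coincides with the classical GN step solving J^T J p = -J^T r.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))
residual = rng.standard_normal(20)
p = ggn_direction(J, residual, np.ones(20), damping=0.0)
p_gn = np.linalg.solve(J.T @ J, -J.T @ residual)
assert np.allclose(p, p_gn)
```

In practice (and in the inexact method of the cited paper), this system is solved approximately with an iterative solver such as conjugate gradients rather than a dense factorization, which is what makes the approach scale to large tensors.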
“…In comparison to prior work which used an alternating least squares (ALS) scheme (Beylkin et al., 2009; Garcke, 2010) as a means to train the model parameters, the GN algorithm is known to exhibit superior convergence properties (see, e.g., Sorber et al., 2013; Vervliet and De Lathauwer, 2019). Building on the results for the GN-based computation of a CPD using alternative cost functions (Vandecappelle et al., 2021), we show that our algorithm can be altered to accommodate logistic cost functions which are more suitable for classification problems.…”
Section: Introduction (mentioning)
confidence: 95%
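For the logistic cost mentioned in the statement above, the GGN ingredients are the entrywise gradient and curvature of the cross-entropy loss at the model output. A hedged sketch, with illustrative names only: since the entrywise curvature σ(m)(1−σ(m)) is nonnegative, the resulting GGN Hessian J^T D J is positive semidefinite, which is exactly why GGN remains well behaved for this non-LS cost.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_grad_and_curvature(m, t):
    """Entrywise derivatives of the logistic (cross-entropy) loss
    ell(m, t) = -t*log(sigmoid(m)) - (1-t)*log(1-sigmoid(m))
    at model output m with binary targets t. Names are illustrative."""
    p = sigmoid(m)
    grad = p - t              # d ell / d m
    curv = p * (1.0 - p)      # d^2 ell / d m^2, always >= 0
    return grad, curv

# At m = 0 with target 1: sigmoid(0) = 0.5, so grad = -0.5, curv = 0.25.
m = np.array([-2.0, 0.0, 3.0])
t = np.array([0.0, 1.0, 1.0])
grad, curv = logistic_grad_and_curvature(m, t)
```

These two vectors plug directly into the generalized GN linear system J^T diag(curv) J p = -J^T grad in place of the least-squares residual and identity curvature.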
“…Many data science problems such as latent factor analysis have been solved by reformulating them as tensor decomposition problems [9][10][11][12]. An inexact generalized Gauss-Newton algorithm has been proposed for scaling the CPD of large tensors with non-least-squares cost functions [13]. Moreover, a generalized Gauss-Newton algorithm with an efficient parallel implementation has been proposed for tensor completion with generalized loss functions [14].…”
Section: Introduction (mentioning)
confidence: 99%