2005
DOI: 10.1137/040604959

Algorithms for Numerical Analysis in High Dimensions

Abstract: Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear sys…

Cited by 318 publications (401 citation statements)
References 37 publications
“…These sequences correspond to situations where only the right-hand sides are changing for a given dimension d. An efficient solution method is of primary interest in certain applications related to, e.g., financial engineering, molecular biology, or quantum dynamics [5,6]. In the numerical experiments reported here (performed in Matlab) we have used second-order finite difference discretization schemes leading to sparse matrices with at most 2d + 1 nonzero elements per row.…”
Section: A Numerical Illustrationmentioning
confidence: 99%
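The 2d + 1 nonzeros-per-row structure quoted in this excerpt can be illustrated by assembling a second-order finite-difference Laplacian on a d-dimensional grid as a Kronecker sum. The sketch below is in Python/SciPy rather than the Matlab used in the cited experiments, and the function name `laplacian_nd` is ours, not from any cited paper:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_nd(n, d):
    """Second-order finite-difference Laplacian on a d-dimensional grid
    with n points per dimension, built as a Kronecker sum of 1-D operators."""
    # 1-D second-difference matrix with Dirichlet boundaries
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
    I = sp.identity(n, format="csr")
    A = sp.csr_matrix((n**d, n**d))
    for k in range(d):
        # identity in every dimension except k, where the 1-D stencil acts
        term = T if k == 0 else I
        for j in range(1, d):
            term = sp.kron(term, T if j == k else I, format="csr")
        A = A + term
    return A

A = laplacian_nd(n=4, d=3)
# Each row couples the diagonal entry plus two neighbors per dimension,
# so no row has more than 2d + 1 = 7 nonzeros.
print(max(np.diff(A.indptr)))  # prints 7
```

The d diagonal contributions merge into a single diagonal entry, while each dimension adds at most two off-diagonal neighbors, which is where the 2d + 1 bound comes from.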
“…One of the most popular minimization methods for solving the low-rank approximation problem with fixed representation rank is the alternating least squares (ALS) algorithm. In [11, Harshman], the ALS method was applied to principal component analysis of order-three tensors, and in [1, 2, Beylkin, Mohlenkamp] to tensors represented in the canonical format. Furthermore, the minimization problem has also been solved by a Gauss–Newton method in [13, 14, Paatero] and by a Newton method in [12, Oseledets, Savost'yanov].…”
Section: Introductionmentioning
confidence: 99%
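The ALS iteration for the canonical (CP) format mentioned in this excerpt can be sketched for an order-three tensor: each factor matrix is updated in turn by solving a linear least-squares problem while the other two are held fixed. This is a minimal NumPy sketch, not the implementation from any of the cited papers; `khatri_rao` and `cp_als` are illustrative names:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x r) and C (K x r) -> (J*K x r)."""
    r = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, r)

def cp_als(X, r, n_iter=500, seed=0):
    """Alternating least squares for a rank-r canonical decomposition
    X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r] of an order-three tensor."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    # mode-n unfoldings (row-major conventions matching khatri_rao above)
    X1 = X.reshape(I, -1)
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)
    for _ in range(n_iter):
        # each update solves the normal equations of a linear LSQ problem
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# usage: recover an exactly rank-2 tensor from random factors
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, r=2)
```

Note the Hadamard product `(B.T @ B) * (C.T @ C)` in the normal equations: it is the Gram matrix of the Khatri–Rao factor and is only r x r, which keeps each ALS step cheap even when the tensor itself is large.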
“…This representation is similar to the partitioned singular value decomposition considered in [16,17]. We note that both the separated and PLR representations are interesting on their own, with applications in other areas, e.g., computational quantum mechanics (see [15,18]). …”
Section: Introductionmentioning
confidence: 88%
“…where the transition matrix between the coefficients s_{il} and s̃_{il} has the block tridiagonal structure (17) and (18). Following the derivation in [11], we obtain (20) for l = 0, .…
Section: Derivative Matrices With Boundary and Interface Conditionsmentioning
confidence: 99%