2006
DOI: 10.1007/s10208-005-0179-9

Trust-Region Methods on Riemannian Manifolds

Abstract: A general scheme for trust-region methods on Riemannian manifolds is proposed and analyzed. Among the various approaches available to (approximately) solve the trust-region subproblems, particular attention is paid to the truncated conjugate-gradient technique. The method is illustrated on problems from numerical linear algebra.
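The truncated conjugate-gradient (Steihaug-Toint) technique highlighted in the abstract is an inner solver for the trust-region subproblem: it approximately minimizes the quadratic model of the cost function on the tangent space, stopping early when it reaches the trust-region boundary or detects negative curvature. The following is a minimal sketch of such an inner solver, not the authors' implementation: it identifies the tangent space with R^n under the Euclidean inner product, and the names `truncated_cg`, `hess_vec`, and `Delta` are illustrative assumptions.

```python
# Sketch of the Steihaug-Toint truncated CG method for the trust-region
# subproblem  min_s  <g, s> + 0.5 <s, H s>  subject to  ||s|| <= Delta,
# posed on the tangent space at the current iterate (identified here with
# R^n and the Euclidean inner product -- a simplifying assumption).
import numpy as np

def truncated_cg(grad, hess_vec, Delta, tol=1e-8, max_iter=100):
    n = grad.shape[0]
    s = np.zeros(n)          # current inner solution
    r = grad.copy()          # residual (gradient of the model at s)
    d = -r                   # search direction
    r0_norm = np.linalg.norm(r)
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:
            # Negative curvature: follow d to the trust-region boundary.
            return s + _boundary_step(s, d, Delta) * d
        alpha = (r @ r) / dHd
        s_next = s + alpha * d
        if np.linalg.norm(s_next) >= Delta:
            # The step would leave the trust region: stop at the boundary.
            return s + _boundary_step(s, d, Delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) <= tol * r0_norm:
            return s_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        s, r = s_next, r_next
    return s

def _boundary_step(s, d, Delta):
    # Positive root tau of ||s + tau d|| = Delta.
    a, b, c = d @ d, 2 * (s @ d), s @ s - Delta**2
    return (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
```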

Cited by 390 publications (436 citation statements)
References 66 publications
“…Optimization on manifolds finds applications in two broad classes of situations: classical equality-constrained optimization problems where the constraints specify a submanifold of R n ; and problems where the objective function has continuous invariance properties that we want to eliminate for various reasons, e.g., efficiency, consistency, applicability of certain convergence results, and avoiding failure of certain algorithms due to degeneracy. As a result, the generalization to manifolds of algorithms for unconstrained optimization in R n can yield useful and efficient numerical methods; see, e.g., recent work on Riemannian trust-region methods [2] and other methods mentioned in [3]. Since BFGS is one of the classical methods for unconstrained optimization (see [7,10]), it is natural that its generalization be a topic of interest.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…The minimization of the models is done via an iterative process, which is referred to as the inner iteration, to distinguish it from the principal outer iteration. We present here the process in a way that does not require a background in differential geometry; we refer to [13] for the mathematical foundations of the technique.…”
Section: Riemannian Trust-Region Methods With Newton Model
Citation type: mentioning
confidence: 99%
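The quoted passage describes the two-level structure of the Riemannian trust-region method: an outer iteration builds a quadratic (Newton) model in the tangent space, calls the inner solver, and accepts or rejects the retracted step based on the ratio of actual to predicted decrease. Below is a hedged sketch of that outer loop under the same simplifying assumptions as the inner solver above; `cost`, `grad`, `hess_vec`, and `retract` are user-supplied placeholders, and the radius-update constants are conventional textbook choices, not values taken from the paper.

```python
import numpy as np

def riemannian_tr(x, cost, grad, hess_vec, retract, Delta=1.0,
                  Delta_max=10.0, rho_prime=0.1, max_outer=100):
    for _ in range(max_outer):
        g = grad(x)                      # Riemannian gradient at x
        if np.linalg.norm(g) < 1e-10:
            break                        # approximately first-order stationary
        Hv = lambda v: hess_vec(x, v)    # Hessian-vector product (Newton model)
        # Inner iteration: approximately minimize the quadratic model.
        s = truncated_cg(g, Hv, Delta)
        x_trial = retract(x, s)          # map the tangent step back to the manifold
        # Ratio of actual to predicted decrease.
        model_decrease = -(g @ s + 0.5 * (s @ Hv(s)))
        rho = (cost(x) - cost(x_trial)) / model_decrease
        if rho < 0.25:
            Delta *= 0.25                # poor agreement: shrink the trust region
        elif rho > 0.75 and abs(np.linalg.norm(s) - Delta) < 1e-12:
            Delta = min(2 * Delta, Delta_max)   # boundary hit with good agreement: expand
        if rho > rho_prime:
            x = x_trial                  # accept the step
    return x
```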
“…Methods of Riemannian Optimization (RO) have recently been a subject of great interest in data analytics communities; see, for example, [Absil et al, 2007, 2008, Bento et al, 2016, Bonnabel, 2013, Cambier and Absil, 2016]. Some optimization problems discussed in previous sections can be naturally formulated on Riemannian manifolds, so as to directly benefit from the underlying geometric structures that can be exploited to significantly reduce the cost of obtaining solutions.…”
Section: Riemannian Optimization For Low-Rank Tensor Manifolds
Citation type: mentioning
confidence: 99%