2013
DOI: 10.1093/imanum/drt031
Energy-diminishing integration of gradient systems

Abstract: For gradient systems in Euclidean space or on a Riemannian manifold the energy decreases monotonically along solutions. Algebraically stable Runge-Kutta methods are shown to also reduce the energy in each step under a mild step-size restriction. In particular, Radau IIA methods can combine energy monotonicity and damping in stiff gradient systems. Discrete-gradient methods and averaged vector field collocation methods are unconditionally energy-diminishing, but cannot achieve damping for very stiff gradient sy…
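The abstract's claim about algebraically stable methods can be illustrated with the one-stage Radau IIA method (implicit Euler) on a stiff linear gradient system. This is a minimal sketch, not the paper's analysis: the quadratic energy V(x) = ½ xᵀAx and the matrix, step size, and initial vector below are illustrative assumptions.

```python
import numpy as np

# Gradient system x' = -grad V(x) with quadratic energy V(x) = 0.5 * x.A.x.
# The one-stage Radau IIA method is implicit Euler; for this linear problem
# each step amounts to solving (I + h*A) x_new = x_old.
A = np.array([[2.0, 0.5],
              [0.5, 1000.0]])   # symmetric positive definite, very stiff

def V(x):
    return 0.5 * x @ A @ x

def implicit_euler_step(x, h):
    return np.linalg.solve(np.eye(2) + h * A, x)

x = np.array([1.0, 1.0])
energies = [V(x)]
for _ in range(20):
    x = implicit_euler_step(x, h=0.1)
    energies.append(V(x))

# Energy decreases monotonically along the discrete trajectory, and the
# stiff component is strongly damped, as the abstract describes for Radau IIA.
assert all(e1 > e2 for e1, e2 in zip(energies, energies[1:]))
```

Note that the step is implicit (a linear solve here; a nonlinear solve for general V), which is the computational price these excerpts discuss.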

Cited by 61 publications (57 citation statements)
References 30 publications
“…In this regard, there already exists a considerable range of choices. Some Runge-Kutta methods have been proved to preserve a Lyapunov function [38], but their computational cost should be explored carefully, because the methods are implicit and require step-size restrictions. Projection methods [39] are also promising, but they incur significant computational cost, because a nonlinear equation must be solved at every step, even though they are formally explicit.…”
Section: Discussion
confidence: 99%
“…We have so far verified the second-order convergence of the YBABY method, and its preservation of the Poisson structure for the reversible dynamics as well as the conformal symplecticity. However, it is unclear under what conditions there exist a modified energy and an associated friction matrix as in (15). To this end, in what follows we modify the irreversible part of the system as discussed at the beginning of this section.…”
Section: The YBABY Method
confidence: 99%
“…In this section, we discuss the construction of GENERIC integrators based on splitting the reversible and irreversible parts of the system. In order to satisfy the modified degeneracy condition (15), we explore the possibility of adjusting the irreversible part using a modified friction matrix that corresponds to a modified energy associated with the symplectic integrator used for the reversible part.…”
Section: Construction of Split GENERIC Integrators Based on Reversibl…
confidence: 99%
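The reversible-irreversible splitting idea in these excerpts can be sketched on a toy problem. This is a generic illustration, not the cited YBABY scheme: the damped harmonic oscillator, friction coefficient, and step size below are assumptions, and the Hamiltonian part is treated with plain leapfrog.

```python
import math

# Strang splitting for a damped harmonic oscillator
#     q' = p,   p' = -q - gamma * p.
# The reversible (Hamiltonian) part is integrated by leapfrog; the
# irreversible (friction) part p' = -gamma * p is solved exactly.
gamma, h = 0.5, 0.01

def reversible_leapfrog(q, p, h):
    p -= 0.5 * h * q      # half kick from the potential force -q
    q += h * p            # drift
    p -= 0.5 * h * q      # half kick
    return q, p

def irreversible_exact(p, h):
    return math.exp(-gamma * h) * p   # exact flow of p' = -gamma * p

def step(q, p):
    p = irreversible_exact(p, 0.5 * h)   # half friction
    q, p = reversible_leapfrog(q, p, h)  # full reversible step
    p = irreversible_exact(p, 0.5 * h)   # half friction
    return q, p

q, p = 1.0, 0.0
E0 = 0.5 * (p * p + q * q)
for _ in range(1000):
    q, p = step(q, p)
E = 0.5 * (p * p + q * q)

# The reversible substep is symplectic; the friction substeps dissipate
# the energy, so the total energy decays over the trajectory.
assert 0.0 < E < E0
```

The design point the excerpts raise is that the two substeps must be made compatible (via a modified energy and friction matrix) for the composed method to satisfy the structural degeneracy conditions; this sketch only shows the bare splitting.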
“…Here, we have just replaced the gradient in (3) with the discrete gradient. However, there is a significant difference between (3) and (7) from the viewpoint of optimisation problems. For the solution to (7), the following discrete dissipation property holds:…”
Section: Unconditioned Optimisation Problem for a Strictly Convex Obj…
confidence: 99%
“…Let a differentiable function f : R^n → R be strictly convex and coercive. Then, the sequence {x^(k)}_{k=0}^∞ obtained by the iteration (7) converges to the unique minimiser of the function f for any initial vector x^(0), i.e.,…”
Section: Unconditioned Optimisation Problem for a Strictly Convex Obj…
confidence: 99%
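The discrete dissipation property quoted above can be sketched with a mean-value (Gonzalez) discrete gradient. This is an assumption-laden illustration, not the cited scheme (7) itself: for a quadratic objective f(x) = ½ xᵀAx the discrete gradient is A(x + y)/2 and each step reduces to a linear solve; the matrix, step size, and starting point are made up.

```python
import numpy as np

# Discrete-gradient descent:  (x_new - x) / h = -DG(x, x_new),
# where DG satisfies the key identity  f(y) - f(x) = DG(x, y) . (y - x).
# For f(x) = 0.5 * x.A.x the mean-value discrete gradient is A (x + y) / 2,
# so the implicit step becomes  (I + (h/2) A) x_new = (I - (h/2) A) x.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite -> f strictly convex

def f(x):
    return 0.5 * x @ A @ x

def dg_step(x, h):
    n = len(x)
    return np.linalg.solve(np.eye(n) + 0.5 * h * A,
                           (np.eye(n) - 0.5 * h * A) @ x)

x = np.array([4.0, -3.0])
values = [f(x)]
for _ in range(50):
    x = dg_step(x, h=1.0)   # unconditional decrease: any h > 0 works
    values.append(f(x))

# Discrete dissipation: f(x_new) - f(x) = -||x_new - x||^2 / h <= 0,
# so the objective values decrease monotonically toward f(0) = 0.
assert all(a >= b for a, b in zip(values, values[1:]))
assert values[-1] < 1e-6
```

The dissipation identity holds for any step size, which is the "unconditional" part of the quoted result; convergence to the unique minimiser then follows from strict convexity and coercivity.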