2020
DOI: 10.1007/s10589-020-00214-x
Consistent treatment of incompletely converged iterative linear solvers in reverse-mode algorithmic differentiation

Abstract: Algorithmic differentiation (AD) is a widely-used approach to compute derivatives of numerical models. Many numerical models include an iterative process to solve non-linear systems of equations. To improve efficiency and numerical stability, AD is typically not applied to the linear solvers. Instead, the differentiated linear solver call is replaced with hand-produced derivative code that exploits the linearity of the original call. In practice, the iterative linear solvers are often stopped prematurely to re…
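The replacement the abstract describes can be shown in a few lines: rather than differentiating through the iterations of the linear solver, reverse mode performs one additional solve with the transposed matrix. Below is a minimal NumPy sketch of that standard hand-coded adjoint, with a direct solve standing in for the iterative solver and a randomly generated test matrix; it illustrates the fully converged case, not the paper's treatment of premature stopping.

```python
import numpy as np

# Hand-coded reverse-mode derivative of x = solve(A, b), exploiting linearity:
#   bbar = A^{-T} xbar          (one extra solve with the transposed matrix)
#   Abar = -bbar x^T
# A direct solve stands in for the iterative solver in this sketch.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)    # well-conditioned test matrix
b = rng.normal(size=n)

x = np.linalg.solve(A, b)                      # primal (forward) solve
xbar = rng.normal(size=n)                      # incoming output adjoint (seed)

bbar = np.linalg.solve(A.T, xbar)              # adjoint solve with A^T
Abar = -np.outer(bbar, x)                      # adjoint w.r.t. the matrix entries

# Duality check against a forward-mode (tangent) perturbation of (A, b):
dA, db = rng.normal(size=(n, n)), rng.normal(size=n)
dx = np.linalg.solve(A, db - dA @ x)           # tangent of the solve
print(np.dot(xbar, dx))                        # these two numbers should
print(np.dot(bbar, db) + np.sum(Abar * dA))    # agree to round-off
```

The last two prints check discrete duality: with exact solves the tangent and adjoint contractions agree to round-off; the paper's subject is what this consistency requires when the solves are stopped early.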

Cited by 5 publications (9 citation statements) · References 30 publications
“…It repeatedly applies the chain rule to the program's sequence of elementary arithmetic operations and functions. The AD method has two operating modes, namely forward accumulation and reverse accumulation [99]. The two modes compute the gradient of the function with a seed vector whose size matches the function inputs (forward accumulation) or the function outputs (reverse accumulation) [99].…”
Section: Central Finite Difference f(x), Figure 18: A Clarification of the Finite Difference Methods (mentioning)
confidence: 99%
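To make the seed-vector sizes in the excerpt concrete, here is a small hand-coded sketch (the map f and the seed values are made up for illustration): forward accumulation takes a seed the size of the two inputs and yields one Jacobian column, while reverse accumulation takes a seed the size of the three outputs and yields one Jacobian row.

```python
import numpy as np

# f : R^2 -> R^3, with hand-coded forward and reverse accumulation.
# Forward accumulation propagates a seed the size of the inputs (a
# Jacobian-vector product); reverse accumulation propagates a seed the
# size of the outputs (a vector-Jacobian product).
def f(x):
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def f_jvp(x, xdot):                # forward mode: J(x) @ xdot
    return np.array([xdot[0] * x[1] + x[0] * xdot[1],
                     np.cos(x[0]) * xdot[0],
                     2.0 * x[1] * xdot[1]])

def f_vjp(x, ybar):                # reverse mode: J(x).T @ ybar
    return np.array([ybar[0] * x[1] + ybar[1] * np.cos(x[0]),
                     ybar[0] * x[0] + ybar[2] * 2.0 * x[1]])

x = np.array([0.3, 1.7])
print(f_jvp(x, np.array([1.0, 0.0])))       # first column of the Jacobian
print(f_vjp(x, np.array([0.0, 1.0, 0.0])))  # second row of the Jacobian
```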
“…The AD method has two operating modes, namely forward accumulation and reverse accumulation [99]. The two modes compute the gradient of the function with a seed vector whose size matches the function inputs (forward accumulation) or the function outputs (reverse accumulation) [99]. The AD method was used to find the sensitivity of the electromagnetic force to different geometric parameters of a linear actuator in [100].…”
Section: Central Finite Difference f(x), Figure 18: A Clarification of the Finite Difference Methods (mentioning)
confidence: 99%
“…Both (7) and (11) are duality-preserving with respect to tangent linearizations based on their respective definitions of the residual, provided the transposition of M is consistent.…”
Section: B. Two Types of Adjoint Fixed-Point (mentioning)
confidence: 99%
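The excerpt's equations (7) and (11) are not reproduced here, but the property it names can be demonstrated on a generic preconditioned fixed-point iteration: as long as the adjoint recursion applies the transposes of M and A and is truncated after the same number of sweeps as the tangent recursion, the two derivative contractions remain equal. The NumPy sketch below is my own illustration under those assumptions (random test operator, identity preconditioner), not the cited paper's formulation.

```python
import numpy as np

# Tangent and adjoint sweeps of a preconditioned fixed-point iteration
#   x_{k+1} = x_k + M (b - A x_k),
# differentiated with respect to b only, both truncated after K sweeps.
# The adjoint recursion must apply M^T and A^T; with that transposition the
# truncated tangent and adjoint stay dual to each other for any K.
rng = np.random.default_rng(1)
n, K = 4, 8
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))  # made-up test operator
M = np.eye(n)                                  # placeholder preconditioner
db = rng.normal(size=n)                        # tangent (input) seed
xbar = rng.normal(size=n)                      # adjoint (output) seed

dx = np.zeros(n)                               # tangent sweep, x_0 held fixed
for _ in range(K):
    dx = dx + M @ (db - A @ dx)

lam, bbar = xbar.copy(), np.zeros(n)           # adjoint sweep (reversed loop)
for _ in range(K):
    bbar = bbar + M.T @ lam
    lam = lam - A.T @ (M.T @ lam)

print(np.dot(xbar, dx))                        # the two contractions agree
print(np.dot(bbar, db))                        # to round-off for any K
```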
“…The final set shows the impact of exactly linearizing the residual operator at every stage of an RK scheme when the gradients are only computed at the first stage and then frozen throughout the analysis. All cases were run on an unstructured triangular mesh consisting of 4212 elements shown in Figure 1, in M = 0.7 flow with α = 2°, with two Hicks-Henne bump functions [6] used as design variables to perturb the airfoil surface.…”
Section: Impact of Approximate Linearization on Sensitivity Accuracy (mentioning)
confidence: 99%
“…Padway and Mavriplis [16] showed by numerical experiment that, for an approximately linearized quasi-Newton fixed-point iteration, convergence of the non-linear problem led to a decrease in the error introduced by the approximate linearization. The issue of inexact linearization of a linear system solve has been investigated previously to produce consistent automatic differentiation of linear system solves in segregated solvers [1]. These linearizations would be necessary in the "piggy-back" iterations of the one-shot adjoint method [4] when applied to implicit nonlinear solvers.…”
Section: Introduction (mentioning)
confidence: 99%