2020
DOI: 10.1137/20m1324338

Discretization of Linear Problems in Banach Spaces: Residual Minimization, Nonlinear Petrov--Galerkin, and Monotone Mixed Methods

Abstract: This work presents a comprehensive discretization theory for abstract linear operator equations in Banach spaces. The fundamental starting point of the theory is the idea of residual minimization in dual norms, and its inexact version using discrete dual norms. It is shown that this development, in the case of strictly-convex reflexive Banach spaces with strictly-convex dual, gives rise to a class of nonlinear Petrov-Galerkin methods and, equivalently, abstract mixed methods with monotone nonlinearity. Under t…
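
As a rough sketch of the formulation the abstract describes (notation assumed here, not quoted from the paper): given a bounded linear operator B : U → V' between Banach spaces and data f ∈ V', residual minimization in the dual norm seeks

    u_h = argmin_{w_h ∈ U_h} ‖f − B w_h‖_{V'}

over a discrete trial space U_h ⊂ U. When V is reflexive and both V and V' are strictly convex, the duality mapping J_V : V → V' is single-valued and bijective, and first-order optimality recasts the minimization as a mixed problem with a monotone nonlinearity: find (t, u_h) ∈ V × U_h such that

    ⟨J_V(t), v⟩ + ⟨B u_h, v⟩ = ⟨f, v⟩   for all v ∈ V,
    ⟨B w_h, t⟩ = 0                      for all w_h ∈ U_h.

In a Hilbert space J_V is the (linear) Riesz map and this is the classical minimal-residual saddle-point system; in a genuine Banach space J_V is nonlinear, which is the source of the "nonlinear Petrov-Galerkin" and "monotone mixed" readings in the title.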

Citation types: 0 supporting, 30 mentioning, 0 contrasting
Years published (citing publications): 2020–2024

Cited by 16 publications (30 citation statements). References 32 publications.

Citation statements:

“…In some sense duality mappings are a suitable nonlinear replacement for the Riesz map. This becomes particularly evident in the context of residual minimization problems; see, for example, [26].…”
Section: Outline of the Paper (mentioning; confidence: 97%)
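
For reference, the duality mapping of a Banach space V with norm ‖·‖_V is

    J_V(v) = { v' ∈ V' : ⟨v', v⟩ = ‖v‖_V² and ‖v'‖_{V'} = ‖v‖_V },

which is single-valued whenever the dual norm is strictly convex; when V is a Hilbert space, J_V coincides with the Riesz map, which is the sense in which it is its "nonlinear replacement".
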
“…In the previous section we have heuristically constructed the linear functional ℓ̃ that allowed us to define a suitable test function. In [26, Section 2] the same idea is undertaken in a rather more abstract setting. The functional ℓ̃ can also be defined…”
Section: Duality Mappings (mentioning; confidence: 99%)
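
A sketch of the construction alluded to above, under the same standing assumptions as before (the specific functional ℓ̃ of the quoted work is not reproduced here): a nonzero functional ℓ ∈ V' determines the test function t = J_V⁻¹(ℓ) ∈ V, which satisfies ⟨ℓ, t⟩ = ‖ℓ‖²_{V'} and ‖t‖_V = ‖ℓ‖_{V'}. Up to normalization, t is the function on which ℓ attains its norm, i.e., ⟨ℓ, t/‖t‖_V⟩ = sup_{‖v‖_V = 1} ⟨ℓ, v⟩, which is what makes it a suitable test function.
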
“…More recently, a novel approach to designing finite element methods in a very general Banach space setting has been introduced in [17]. This approach is rooted in the so-called Discontinuous Petrov-Galerkin (DPG) methods [18] and extends the concept of optimal test norms and functions from Hilbert spaces to more general Banach spaces, yielding a scheme that can be interpreted as a non-linear Petrov-Galerkin method.…”
Section: Introduction (mentioning; confidence: 99%)
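
For context, in the Hilbert-space DPG setting that the quoted passage starts from, the optimal test function attached to a trial function w is the solution t_w ∈ V of

    (t_w, v)_V = ⟨B w, v⟩   for all v ∈ V,

i.e., t_w = R_V⁻¹ B w with R_V the Riesz map, and Petrov-Galerkin on span{t_w : w ∈ U_h} reproduces the residual minimizer. In a Banach space the analogous construction runs through the duality mapping instead of R_V and is no longer linear in w, which is why the scheme of [17] is read as a nonlinear Petrov-Galerkin method.
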
“…The Petrov-Galerkin method is equivalent to a Minimal-Residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods [3,4]. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data.…”
(mentioning; confidence: 99%)
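
The Petrov-Galerkin/minimal-residual equivalence quoted above is easy to check in a finite-dimensional, Hilbert-space setting. The sketch below is illustrative only, with made-up stand-ins for a discretized operator B, a test-space Gram matrix G, and a load f; it does not attempt the neural-network test-space parameterization of the cited work.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 12, 5                      # test- and trial-space dimensions, m > n
    B = rng.standard_normal((m, n))   # stand-in for a discretized operator B : U_h -> V_h'
    f = rng.standard_normal(m)        # stand-in for the right-hand side in V_h'
    A = rng.standard_normal((m, m))
    G = A @ A.T + m * np.eye(m)       # SPD Gram matrix of the discrete test inner product

    # (1) Residual minimization in the discrete dual norm:
    #     minimize (f - B u)^T G^{-1} (f - B u)  <=>  B^T G^{-1} B u = B^T G^{-1} f.
    u_min = np.linalg.solve(B.T @ np.linalg.solve(G, B), B.T @ np.linalg.solve(G, f))

    # (2) Equivalent mixed (saddle-point) system, with t = G^{-1}(f - B u)
    #     the Riesz representative of the residual:
    #     [ G    B ] [ t ]   [ f ]
    #     [ B^T  0 ] [ u ] = [ 0 ]
    K = np.block([[G, B], [B.T, np.zeros((n, n))]])
    t_u = np.linalg.solve(K, np.concatenate([f, np.zeros(n)]))
    u_mix = t_u[m:]

    print(np.allclose(u_min, u_mix))  # True: both formulations give the same u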