2018
DOI: 10.1137/18m1171965

Convergence of Implicit Schemes for Hamilton--Jacobi--Bellman Quasi-Variational Inequalities

Abstract: In [Azimzadeh, P., and P. A. Forsyth. "Weakly chained matrices, policy iteration, and impulse control." SIAM J. Num. Anal. 54.3 (2016): 1341-1364], we outlined the theory and implementation of computational methods for implicit schemes for Hamilton-Jacobi-Bellman quasi-variational inequalities (HJBQVIs). No convergence proofs were given therein. This work closes the gap by giving rigorous proofs of convergence. We do so by introducing the notion of nonlocal consistency and appealing to a Barles-Souganidis type…

Cited by 23 publications (48 citation statements); references 30 publications.

“…Assumption 1 will only be used in Section 3 to quantify the regularization errors (not for the well-posedness or the monotone convergence of the regularization procedures). It is well-known that a concave function can be equivalently represented as the infimum of a family of affine functions, i.e., $F_i(u) = \inf_{\alpha \in A_i} B_i(\alpha) u - b_i(\alpha)$ for some set $A_i$ and coefficients $B_i : A_i \to \mathbb{R}^{N_d \times N_d}$ and $b_i : A_i \to \mathbb{R}^{N_d}$, hence our error estimates apply to the HJBQVIs studied in [7,23,2,12,1]. However, our setting significantly extends the classical HJBQVIs in the following important aspects: (1) $F_i$ can depend on all components of the solutions to the switching systems, (2) the control set $A_i$ can be non-compact and coefficients $B_i$, $b_i$ can be discontinuous, (3) $b_i$ does not necessarily have a unique sign.…”
Section: Penalty Approximations of QVIs (mentioning)
confidence: 98%
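
As a minimal illustration of the dual representation quoted above (a hypothetical scalar example, not taken from the cited works), a concave piecewise-linear function is the pointwise infimum of its affine pieces:

    % hypothetical scalar instance of F(u) = inf_{alpha in A} B(alpha) u - b(alpha)
    F(u) = \min(2u,\; u+1,\; 3)
         = \inf_{\alpha \in \{1,2,3\}} \bigl( B(\alpha)\,u - b(\alpha) \bigr),
    \qquad (B(1),b(1)) = (2,0),\ \ (B(2),b(2)) = (1,-1),\ \ (B(3),b(3)) = (0,-3).

In the quoted setting the same identity holds with matrix coefficients $B_i(\alpha) \in \mathbb{R}^{N_d \times N_d}$ and vectors $b_i(\alpha) \in \mathbb{R}^{N_d}$.
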
“…Due to the fact that the control j takes only d distinct values, we can apply the penalty term finitely many times (once per value). This is not directly possible in the framework of [1,2], where the number of attainable values for the control in the intervention operator grows unbounded as the meshing parameter in the approximation of an infinite control set approaches zero (see [19] for an extension of such a penalty scheme to general intervention operators with an infinite number of control values: the summation is replaced by an integral, which might subsequently be approximated by quadrature).…”
Section: Penalty Approximations of QVIs (mentioning)
confidence: 99%
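
A minimal sketch of the point made in this quotation, assuming a discrete switching system in which regime i obeys the obstacle u_i >= u_j - c_ij with respect to the d - 1 other regimes j (the interface names, the switching costs c_ij, and the nonlinearity F are hypothetical placeholders, not the cited papers' exact operators):

    import numpy as np

    def penalized_residual(u, F, c, rho):
        # u: (d, N) array of value-function approximations, one row per regime i
        # F: callable F(i, u) -> (N,) residual of the difference operator in regime i
        # c: (d, d) array of switching costs c[i, j]; rho: penalty parameter
        d = u.shape[0]
        res = np.empty_like(u)
        for i in range(d):
            # one penalty term per attainable control value j != i (finitely many)
            penalty = sum(np.maximum(u[j] - c[i, j] - u[i], 0.0)
                          for j in range(d) if j != i)
            res[i] = F(i, u) - rho * penalty
        return res

Because the switching control takes only d distinct values, the penalty is a finite sum with one term per attainable value; for an infinite control set the sum would instead run over a mesh of the control set or, as in the extension cited as [19], be replaced by an integral approximated by quadrature.
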
“…This suggests that the penalty schemes are significantly more efficient than the direct control scheme for solving large-scale discrete QVIs, as pointed out in [3]. In practice, instead of solving the penalized equation (7.2) with a fixed penalty parameter ρ, we shall construct a convergent approximation to the solution of the QVI (7.1) based on the penalized solutions, by letting 1/ρ and h tend to zero simultaneously (see also [3,1]). The first order convergence of both the penalization error and the discretization error (see Figure 7.1 and Table 7.1) suggests us to take ρ = CN , where the constant C = 1/16 was found to achieve the optimal balance between the penalization error and the discretization error.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
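
A hedged sketch of the refinement strategy described in this quotation (the solver interface and the starting grid size are hypothetical; only the coupling rho = N/16 is taken from the quoted experiment):

    def refine_and_solve(solve_penalized, levels, C=1.0 / 16.0, N0=32):
        # Refine the grid and the penalty parameter together: N doubles at each level
        # and rho = C * N, so 1/rho and the mesh width h shrink at the same rate,
        # balancing the first-order penalization and discretization errors.
        results = []
        for k in range(levels):
            N = N0 * 2**k      # number of grid points (hypothetical starting value N0)
            h = 1.0 / N        # mesh width
            rho = C * N        # penalty parameter coupled to the mesh
            results.append((h, rho, solve_penalized(N, rho)))
        return results

Doubling N halves both h and 1/rho, so the two error contributions decrease together at the observed first-order rate.
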
“…Properties (1) and (2) follow directly from the structure of $M_i$. Property (3) is an analogue of [1, Lemma 12] to the present concave intervention operator $M_i$ and compact set $Z_i$, whose proof will be given in Appendix A for completeness.…”
mentioning
confidence: 98%