2021
DOI: 10.48550/arxiv.2110.05421
Preprint

The One Step Malliavin scheme: new discretization of BSDEs implemented with deep learning regressions

Balint Negyesi, Kristoffer Andersson, Cornelis W. Oosterlee

Abstract: A novel discretization is presented for forward-backward stochastic differential equations (FBSDE) with differentiable coefficients, simultaneously solving the BSDE and its Malliavin sensitivity problem. The control process is estimated by the corresponding linear BSDE driving the trajectories of the Malliavin derivatives of the solution pair, which implies the need to provide accurate Γ estimates. The approximation is based on a merged formulation given by the Feynman-Kac formulae and the Malliavin chain rule…
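To ground the abstract's idea, the sketch below illustrates the kind of conditional-expectation estimates that a one-step BSDE discretization produces. This is not the paper's One Step Malliavin (OSM) scheme: it is a minimal, driver-free toy in which Y_0 and the control Z_0 are recovered by plain Monte Carlo averages rather than neural-network regressions, and no Γ (second-order) estimate is computed. The function name and the terminal condition are illustrative assumptions, not the authors' code.

```python
import numpy as np

def one_step_yz_estimate(g, T, n_paths, seed=0):
    """Toy one-step estimate of (Y_0, Z_0) for the driver-free BSDE
    dY_t = Z_t dW_t, Y_T = g(W_T), with the forward process X = W.

    Y_0 = E[g(W_T)] and, by an integration-by-parts (Malliavin-type)
    identity, Z_0 = E[g(W_T) * W_T] / T.  Here both expectations are
    replaced by Monte Carlo averages; the OSM scheme of the paper
    instead learns such conditional expectations at every time step
    with deep-learning regressions and additionally estimates Gamma.
    """
    rng = np.random.default_rng(seed)
    w_T = rng.standard_normal(n_paths) * np.sqrt(T)  # Brownian endpoint W_T
    y0 = g(w_T).mean()                 # Y_0 ~ E[g(W_T)]
    z0 = (g(w_T) * w_T).mean() / T     # Z_0 ~ E[g(W_T) W_T] / T
    return y0, z0

# Example with g(x) = x^2: the exact values are Y_0 = T and Z_0 = 0.
y0, z0 = one_step_yz_estimate(lambda x: x**2, T=1.0, n_paths=200_000)
```

With `T = 1.0` the Monte Carlo estimates should land near the exact `Y_0 = 1` and `Z_0 = 0`; the statistical error shrinks like `1/sqrt(n_paths)`, which is exactly the regression-accuracy issue the paper's Γ-aware estimator is designed to control.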

Cited by 3 publications (3 citation statements). References 35 publications.
“…As for the LSMC method, these kinds of algorithms are not easily applied to coupled FBSDE. Algorithms of this type can be found in e.g., [42,41,43] and with error analysis [44]. For an overview of machine learning algorithms for approximation of PDE we refer to [45].…”
Section: Introduction (confidence: 99%)
“…Our second method, called Pathwise differential learning, considers furthermore the derivative of this martingale representation, computed by automatic differentiation, which gives another loss function to be minimized in order to train neural networks for approximating the value function and its first and second derivatives. Such differential representation has been also considered in the recent paper [22] for designing a deep learning scheme with one-step loss functions as in the deep backward approach in [16] for solving forward backward SDEs with new estimation and error control of the Z process. Actually, the addition of this derivative loss function permits a better approximation of the terminal condition of the PDE and improves the overall approximation of the PDE solution's value and derivatives on the entire domain.…”
Section: Introduction (confidence: 99%)
“…As for the LSMC method, these kinds of algorithms are not easily applied to coupled FBSDEs. Algorithms of this type can be found in e.g., [94,95,96] and with error analysis [97,98]. For an overview of machine learning algorithms for approximation of PDEs, we refer to [99].…”
Section: Introduction (confidence: 99%)