2020
DOI: 10.3390/jrfm13070158

Pricing and Hedging American-Style Options with Deep Learning

Abstract: In this paper we introduce a deep learning method for pricing and hedging American-style options. It first computes a candidate optimal stopping policy. From there it derives a lower bound for the price. Then it calculates an upper bound, a point estimate and confidence intervals. Finally, it constructs an approximate dynamic hedging strategy. We test the approach on different specifications of a Bermudan max-call option. In all cases it produces highly accurate prices and dynamic hedging strategies with small…
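
To make the first steps of the method concrete, the sketch below trains a candidate stopping policy for a Bermudan max-call and evaluates it on fresh paths to obtain a lower-bound price estimate. It is a minimal illustration in the spirit of the abstract, assuming a PyTorch setup; the model parameters, network sizes, and training schedule are illustrative choices rather than the authors' implementation, and the upper bound, confidence intervals, and hedging strategy are omitted.

```python
import math
import torch

torch.manual_seed(0)

# Example parameters for a Bermudan max-call (illustrative, not from the paper).
d, N, K = 5, 9, 100.0            # number of assets, exercise dates, strike
r, sigma, s0, T = 0.05, 0.2, 100.0, 3.0
dt = T / N
n_paths = 20_000

def simulate_paths(n):
    """Geometric Brownian motion paths sampled at the N exercise dates."""
    z = torch.randn(n, N, d)
    log_inc = (r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
    return s0 * torch.exp(torch.cumsum(log_inc, dim=1))      # shape (n, N, d)

def payoff(s_n, n_step):
    """Max-call payoff at exercise date n_step (0-based), discounted to time 0."""
    disc = math.exp(-r * (n_step + 1) * dt)
    return disc * torch.clamp(s_n.max(dim=-1).values - K, min=0.0)

# One small network per early-exercise date outputs a stopping probability.
nets = [torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1), torch.nn.Sigmoid())
        for _ in range(N - 1)]

paths = simulate_paths(n_paths)
cashflow = payoff(paths[:, N - 1], N - 1)   # at maturity, exercise if in the money

# Learn the stopping decisions backwards in time.
for n in range(N - 2, -1, -1):
    opt = torch.optim.Adam(nets[n].parameters(), lr=1e-3)
    g_n = payoff(paths[:, n], n)
    for _ in range(500):
        p_stop = nets[n](paths[:, n]).squeeze(-1)
        # Soft decision keeps the objective differentiable: mix the immediate
        # payoff with the cashflow from the already-learned later decisions.
        reward = p_stop * g_n + (1.0 - p_stop) * cashflow
        loss = -reward.mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        stop = nets[n](paths[:, n]).squeeze(-1) > 0.5
        cashflow = torch.where(stop, g_n, cashflow)

# Lower-bound estimate: apply the learned (hard-threshold) policy to fresh paths.
with torch.no_grad():
    test = simulate_paths(n_paths)
    cf = payoff(test[:, N - 1], N - 1)
    for n in range(N - 2, -1, -1):
        stop = nets[n](test[:, n]).squeeze(-1) > 0.5
        cf = torch.where(stop, payoff(test[:, n], n), cf)
    print("lower-bound price estimate:", cf.mean().item())
```

One design point worth noting: the stopping decision is trained with a soft (sigmoid) output so the objective stays differentiable, and only the hard threshold at 0.5 is used when the policy is evaluated.
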
Cited by 54 publications (43 citation statements) · References 24 publications

“…For example, such approaches include approximating the Snell envelope or continuation values (cf., e.g., [89,5,28,75]), computing optimal exercise boundaries (cf., e.g., [2]) and dual methods (cf., e.g., [80,51]). Whereas in [51,66] artificial neural networks with one hidden layer were employed to approximate continuation values, more recently numerical approximation methods for American and Bermudan option pricing that are based on deep learning were introduced, cf., for example, [86,85,9,42,10,72,30]. More precisely, in [86,85] deep neural networks are used to approximately solve the corresponding obstacle partial differential equation problem, in [9] the corresponding optimal stopping problem is tackled directly with a deep learning-based algorithm, [42] applies an extension of the deep backward stochastic differential equation (BSDE) solver from [50,37] to the corresponding reflected BSDE problem, [30] suggests a different deep learning-based algorithm that relies on discretising BSDEs and in [10,72] deep neural network-based variants of the classical algorithm introduced by Longstaff & Schwartz [75] are examined.…”
Section: Introduction (mentioning)
confidence: 99%
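
For context, the deep variants of the Longstaff & Schwartz algorithm mentioned in the quotation replace the polynomial regression of continuation values with a neural-network regression. The following single-asset sketch illustrates that idea under assumed parameters; it uses PyTorch, and neither the architecture nor the training setup is taken from any of the cited works.

```python
import math
import torch

torch.manual_seed(0)

r, sigma, s0, K, T, N, n_paths = 0.06, 0.4, 36.0, 40.0, 1.0, 50, 50_000
dt = T / N

# Simulate geometric Brownian motion paths for a one-dimensional Bermudan put.
z = torch.randn(n_paths, N)
s = s0 * torch.exp(torch.cumsum(
    (r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z, dim=1))

payoff = torch.clamp(K - s, min=0.0)      # exercise value at every date
cashflow = payoff[:, -1]                  # policy cashflow, initialised at maturity

for n in range(N - 2, -1, -1):
    cashflow = cashflow * math.exp(-r * dt)       # discount one step back
    itm = payoff[:, n] > 0                        # regress on in-the-money paths only
    if itm.sum() == 0:
        continue
    net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    x, y = s[itm, n:n + 1] / K, cashflow[itm]     # fit the continuation value C(s)
    for _ in range(200):
        loss = ((net(x).squeeze(-1) - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        cont = net(s[:, n:n + 1] / K).squeeze(-1)
    exercise = itm & (payoff[:, n] > cont)        # exercise where immediate value wins
    cashflow = torch.where(exercise, payoff[:, n], cashflow)

price = (cashflow * math.exp(-r * dt)).mean()     # discount the first date to time 0
print("NN Longstaff-Schwartz price estimate:", price.item())
```
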
“…Additionally, the recent advances in machine learning have incentivized research in this particular area and allowed for extensions of these techniques to high-dimensional problems (cf. [KKT10], [BCJ19], [BCJ20], [GMZ20], [RW20]). The present article follows this stream of the literature and provides a variance-reduction technique that can be applied on top of numerous Monte Carlo based algorithms, thereby providing a powerful way to speed up these methods.…”
Section: Introduction (mentioning)
confidence: 99%
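
As a generic illustration of what "variance reduction applied on top of a Monte Carlo estimator" means (this is a plain control variate, not the JDOI technique of the citing article), the sketch below reuses the discounted terminal asset price as a control, since its risk-neutral expectation is known to equal the spot price.

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma, s0, K, T, n = 0.05, 0.2, 100.0, 100.0, 1.0, 100_000

# Plain Monte Carlo estimate of a European call price.
z = rng.standard_normal(n)
s_T = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
payoff = np.exp(-r * T) * np.maximum(s_T - K, 0.0)

# Control variate: discounted terminal price, whose expectation is known (= s0).
control = np.exp(-r * T) * s_T
beta = np.cov(payoff, control)[0, 1] / np.var(control)
adjusted = payoff - beta * (control - s0)

print("plain MC:     %.4f +/- %.4f" % (payoff.mean(), 1.96 * payoff.std() / np.sqrt(n)))
print("with control: %.4f +/- %.4f" % (adjusted.mean(), 1.96 * adjusted.std() / np.sqrt(n)))
```

The adjusted estimator has the same mean but a smaller standard error whenever the payoff and the control are strongly correlated.
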
“…[GMZ20]) and (deep) neural network based stopping-time algorithms (cf. [BCJ19], [BCJ20]), respectively, with our JDOI variance reduction technique. Investigating these types of algorithms could be part of future research.…”
Section: Introduction (mentioning)
confidence: 99%
“…Motivated by the universal approximation theorems [8,9], ANNs are nowadays also being used to approximate solutions to ordinary differential equations (ODEs) or partial differential equations (PDEs) [5,10-12]. Our contribution to this field consists of solving some PDEs that appear in computational finance applications with ANNs, following the unsupervised learning methodology introduced by [13] and refined in [14].…”
Section: Introduction (mentioning)
confidence: 99%
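
The unsupervised methodology referred to here trains a network to satisfy the pricing PDE directly, without labeled solution values. The sketch below shows such a PDE-residual training loop for the Black-Scholes equation of a European call, assuming a PyTorch setup; the collocation scheme, equal loss weighting, and omission of spatial boundary conditions are simplifying assumptions and do not reproduce the weighted formulation of [13,14].

```python
import torch

torch.manual_seed(0)
r, sigma, K, T, s_max = 0.05, 0.2, 100.0, 1.0, 300.0

# Network approximating the option value u(t, s).
u = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(5000):
    # Interior collocation points (t, s) for the PDE residual.
    t = (T * torch.rand(1024, 1)).requires_grad_(True)
    s = (s_max * torch.rand(1024, 1)).requires_grad_(True)
    v = u(torch.cat([t, s], dim=1))
    v_t, v_s = torch.autograd.grad(v.sum(), (t, s), create_graph=True)
    v_ss = torch.autograd.grad(v_s.sum(), s, create_graph=True)[0]
    # Black-Scholes operator: u_t + 0.5 sigma^2 s^2 u_ss + r s u_s - r u = 0.
    residual = v_t + 0.5 * sigma ** 2 * s ** 2 * v_ss + r * s * v_s - r * v

    # Terminal condition u(T, s) = max(s - K, 0).
    s_T = s_max * torch.rand(1024, 1)
    v_T = u(torch.cat([torch.full_like(s_T, T), s_T], dim=1))
    terminal = v_T - torch.clamp(s_T - K, min=0.0)

    loss = (residual ** 2).mean() + (terminal ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, u(0, s0) approximates the European call value at spot s0.
with torch.no_grad():
    print(u(torch.tensor([[0.0, 100.0]])).item())
```
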
“…The authors in [14] extended the class of PDE solutions that may be approximated by these unsupervised learning methods, by translating the PDEs to a suitably weighted minimization problem for the ANNs to solve. Moreover, in [8,9] American options were formulated as optimal stopping problems, where optimal stopping decisions were learned and so-called ANN regression was used to estimate the continuation values. This is an example of the unsupervised learning approach to solve a specific formulation of options with early-exercise features.…”
Section: Introduction (mentioning)
confidence: 99%