2020
DOI: 10.1080/14697688.2020.1817974

Deep learning volatility: a deep neural network perspective on pricing and calibration in (rough) volatility models

Abstract: We present a neural-network-based calibration method that performs the calibration task within a few milliseconds for the full implied volatility surface. The framework is consistently applicable throughout a range of volatility models, including the rough volatility family, and a range of derivative contracts. The aim of neural networks in this work is an off-line approximation of complex pricing functions, which are difficult to represent or time-consuming to evaluate by other means. We highlight how this perspective…
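The abstract's two-step idea (a costly off-line approximation of the pricing map, followed by near-instant on-line calibration against the surrogate) can be sketched numerically as below. This is a minimal illustration under stated assumptions, not the paper's implementation: a polynomial least-squares surrogate stands in for the deep network, and a toy two-parameter surface stands in for a (rough) volatility pricer; the grid, parameter ranges, and names are invented for the example.

```python
# Sketch of the two-step "deep calibration" idea from the abstract:
#   1) off-line: learn the map  theta -> implied-vol surface  on a fixed
#      (strike, maturity) grid from samples of a slow pricer;
#   2) on-line: calibrate theta to observed vols by least squares, with the
#      cheap surrogate standing in for the pricer.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Fixed grid on which the surface is represented (assumed for the example).
strikes = np.linspace(0.8, 1.2, 9)            # moneyness
maturities = np.array([0.1, 0.5, 1.0, 2.0])   # years
K, T = np.meshgrid(strikes, maturities)

def true_surface(theta):
    """Stand-in for a slow pricer: a toy two-parameter vol surface."""
    base, skew = theta
    return base + skew * (K - 1.0) * np.exp(-T)

# Step 1 (off-line, costly): fit a surrogate. A least-squares regression on
# polynomial features plays the role of the deep network here.
def features(theta):
    b, s = theta
    return np.array([1.0, b, s, b * s, b**2, s**2])

thetas = rng.uniform([0.1, -0.5], [0.5, 0.5], size=(2000, 2))
X = np.stack([features(t) for t in thetas])               # (N, 6)
Y = np.stack([true_surface(t).ravel() for t in thetas])   # (N, grid points)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def surrogate_surface(theta):
    return (features(theta) @ W).reshape(K.shape)

# Step 2 (on-line, fast): calibrate to a "market" surface.
market = true_surface(np.array([0.25, -0.15]))  # pretend market quotes
res = least_squares(
    lambda th: (surrogate_surface(th) - market).ravel(),
    x0=np.array([0.2, 0.0]),
)
print("calibrated theta:", res.x)  # approximately (0.25, -0.15)
```

Mapping parameters to the whole surface on a fixed grid, rather than to single prices, is what reduces the on-line step to a small least-squares problem and makes millisecond calibration plausible.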

Cited by 87 publications (76 citation statements). References 56 publications.
“…Since … and …ᵢ are homeomorphisms and since $\phi_i$ is continuous, $\psi_i$ is well-defined and continuous. Moreover, analogously to (15), we compute that $(\psi_i^n(g_i))_{n\in\mathbb{N}}$ is dense in $L^2(\mathbb{R})$. Since $L^2(\mathbb{R})$ is a complete separable metric space with no isolated points, and $\psi_i$ is a continuous self-map of $L^2(\mathbb{R})$ for which there is a vector $g_i \in L^2(\mathbb{R})$ whose set of iterates $\{\psi_i^n(g_i)\}_{n\in\mathbb{N}}$ is dense in $L^2(\mathbb{R})$, the Birkhoff Transitivity Theorem (see the formulation of [74, Theorem 1.16]) implies that for every pair of non-empty open subsets $\tilde{U}, \tilde{V} \subseteq L^2(\mathbb{R})$ there is some $n_{\tilde{U},\tilde{V}}$ satisfying $\phi^{n_{\tilde{U},\tilde{V}}}(\tilde{U}) \cap \tilde{V} \neq \emptyset$.…”
Section: (iii) Decomposition of UAP via topologically transitive dynamics (mentioning)
confidence: 99%
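For context, the theorem this excerpt invokes can be stated as follows. This is one standard textbook formulation, not a quotation of [74, Theorem 1.16] from the citing paper's bibliography:

```latex
% One standard formulation of the theorem invoked in the excerpt above.
\begin{theorem}[Birkhoff Transitivity Theorem]
Let $X$ be a complete separable metric space without isolated points and let
$T \colon X \to X$ be continuous. If some $x \in X$ has a dense orbit
$\{T^{n}(x)\}_{n \in \mathbb{N}}$, then $T$ is topologically transitive: for
every pair of non-empty open sets $U, V \subseteq X$ there exists
$n \in \mathbb{N}$ with $T^{n}(U) \cap V \neq \emptyset$.
\end{theorem}
```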
“…Since then, the universal approximation capabilities of a limited number of neural network architectures, such as feed-forward, residual, and convolutional neural networks, have been solidified as a cornerstone of their approximation success. This, coupled with numerous hardware advances, has led neural networks to find ubiquitous use in a number of areas, ranging from biology, see [7,8], to computer vision and imaging, see [9,10], and to mathematical finance, see [11–15]. As a result, a variety of neural network architectures have emerged, the common thread between them being that they describe an algorithmically generated set of complicated functions built by combining elementary functions in some manner.…”
Section: Introduction (mentioning)
confidence: 99%
“…Because of the complexity of the model (due to the state-dependent volatility and the infinite-dimensional setting), it is in general not possible to derive closed-form pricing formulas. To avoid time-consuming numerical methods, which render calibration almost impossible, in Paper IV we propose a machine-learning approach: we adapt the strategy presented in [68] and train a neural network which approximates option prices as a function of the HJM model parameters. This step is costly but off-line, meaning that no market data is used for training.…”
Section: The HJM approach (mentioning)
confidence: 99%
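A rough sketch of such an off-line step is given below. This is not the code of Paper IV or of [68]: a Black–Scholes Monte Carlo call pricer is an assumed stand-in for the infinite-dimensional HJM pricer, and the network size and parameter ranges are illustrative assumptions.

```python
# Minimal sketch of the off-line training step described in the excerpt:
# sample model parameters, price each sample with a slow Monte Carlo pricer,
# and fit a small feed-forward network to the parameter -> price map.
# No market data enters this step.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def mc_call_price(sigma, T, K, S0=1.0, r=0.0, n_paths=20_000):
    """Slow 'ground truth' pricer (toy stand-in for the real model)."""
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Costly but off-line: build the synthetic training set.
params = rng.uniform([0.05, 0.1, 0.8], [0.5, 2.0, 1.2], size=(3000, 3))
prices = np.array([mc_call_price(*p) for p in params])

# Fast surrogate: once trained, evaluation is essentially instantaneous.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(params, prices)
print(net.predict([[0.2, 1.0, 1.0]]))  # approx. 1y ATM call at sigma = 0.2
```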
“…(2019), Horvath et al. (2021) and Stone (2020). The general idea is to accelerate instrument valuation via the application of a neural network.…”
Section: Introduction (mentioning)
confidence: 99%