2021
DOI: 10.1007/s00780-021-00462-7

Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models

Abstract: We study the expression rates of deep neural networks (DNNs for short) for option prices written on baskets of $d$ risky assets whose log-returns are modelled by a multivariate Lévy process with general correlation structure of jumps. We establish sufficient conditions on the characteristic triplet of the Lévy process $X$ that ensure $\varepsilon$ error of DNN-expressed option prices with DNNs of size that grows…
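The abstract concerns how large a ReLU network must be to approximate the map from initial asset prices to a basket option price under an exponential Lévy model, to accuracy $\varepsilon$. The sketch below is purely illustrative and is not the paper's construction: it assumes a toy Merton jump-diffusion with i.i.d. components (the paper allows a general correlation structure of jumps), generates Monte Carlo basket-call prices, and fits a small ReLU network with scikit-learn. All names and parameters here (d, K, mc_basket_call, the network width and depth) are ad hoc choices for illustration only.

```python
# Illustrative sketch (not the paper's construction): fit a ReLU network to the
# map "initial prices -> basket-call price" under a toy exponential Levy model
# (Merton jump-diffusion with i.i.d. components).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 5                                            # number of assets (assumed)
T, r = 1.0, 0.0                                  # maturity, interest rate
sigma, lam, mu_J, sig_J = 0.2, 0.3, -0.1, 0.15   # diffusion and jump parameters
K = 1.0                                          # strike of the basket call

def mc_basket_call(s0, n_paths=20_000):
    """Monte Carlo price of a basket call under i.i.d. Merton jump-diffusions."""
    # terminal log-returns: Brownian part plus compound Poisson jumps
    W = sigma * np.sqrt(T) * rng.standard_normal((n_paths, d))
    N = rng.poisson(lam * T, size=(n_paths, d))
    J = mu_J * N + sig_J * np.sqrt(N) * rng.standard_normal((n_paths, d))
    # risk-neutral drift with jump compensator
    drift = (r - 0.5 * sigma**2 - lam * (np.exp(mu_J + 0.5 * sig_J**2) - 1.0)) * T
    ST = s0 * np.exp(drift + W + J)
    payoff = np.maximum(ST.mean(axis=1) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# training data: random initial price vectors and their Monte Carlo prices
S0 = rng.uniform(0.8, 1.2, size=(400, d))
prices = np.array([mc_basket_call(s) for s in S0])

# small ReLU network; depth/width are ad hoc, not the rates proved in the paper
net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0).fit(S0, prices)

test = rng.uniform(0.8, 1.2, size=(50, d))
mc = np.array([mc_basket_call(s) for s in test])
print("max abs error vs. Monte Carlo:", np.abs(net.predict(test) - mc).max())
```

The empirical error reported by such a sketch conflates Monte Carlo noise and training error with the expressivity question the paper studies; the paper's results are statements about the existence of networks whose size grows only polynomially in $d$ and $\varepsilon^{-1}$.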

Cited by 22 publications (14 citation statements). References: 37 publications.
“…In the last several years, there has been a number of interesting papers that addressed the role of depth and architecture of deep neural networks in approximating functions that possess special regularity properties such as analytic functions [20,38], differentiable functions [45,52], oscillatory functions [29], functions in Sobolev or Besov spaces [1,27,30,53]. High-dimensional approximations by deep neural networks have been studied in [39,48,16,17], and their applications to high-dimensional PDEs in [47,21,43,31,25,26,28]. Most of these papers used deep ReLU (Rectified Linear Unit) neural networks since the rectified linear unit is a simple and preferable activation function in many applications.…”
Section: Introduction
confidence: 99%
“…In particular, the results in such articles show that deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension d ∈ N of the target functions under considerations. For example, we refer to Elbrächter et al [15], Jentzen et al [33], Gonon et al [20,21], Grohs et al [22,23,25], Kutyniok et al [43], Reisinger & Zhang [49], Beneventano et al [6], Berner et al [7], Hornung et al [31], Hutzenthaler et al [32], and the overview articles Beck et al [4] and E et al [13] for such high-dimensional ANN approximation results in the numerical approximation of solutions of PDEs and we refer to Barron [1][2][3], Jones [34], Girosi & Anzellotti [19], Donahue et al [12], Gurvits & Koiran [28], Kůrková et al [39][40][41][42], Kainen et al [35,36], Klusowski & Barron [38], Li et al [45], and Cheridito et al [9] for such high-dimensional ANN approximation results in the numerical approximation of certain specific target function classes independent of solutions of PDEs (cf., e.g., also Maiorov & Pinkus [46], Pinkus [48], Guliyev & Ismailov [26], Petersen & Voigtlaender [47], and Bölcskei et al [8] for related results). In the proofs of several of the above named high-dimensional approximation results it is crucial that the involved ANNs ar...…”
Section: Introduction
confidence: 99%
“…Accordingly, there is currently a strong interest in the scientific community to understand the success of deep learning. Theoretical deep learning papers usually focus on different aspects of deep learning algorithms such as, for example, optimization methods and training algorithms (cf., e.g., [5,8,11,23,26,30]), generalization errors of ANNs (cf., e.g., [1,3,4,10,22,27]), or the capacity of ANNs to approximate various kinds of functions (cf., e.g., [3,6,12,13,15,16,25,27,29,31,32]).…”
Section: Introduction
confidence: 99%
“…To the best of our knowledge, Theorem 1.3 is the only theorem about approximation capacities of ANNs for PDEs which measures the approximation errors of ANNs based on a supremal condition on the entire euclidean space. Most papers in the scientific literature consider approximation errors in the L p -sense (cf., e.g., [16,21,32]) and some in the supremum sense but on a compact set (cf., e.g., [2,12,13,15]). The remainder of this article is organized as follows.…”
Section: Introduction
confidence: 99%