In particular, the results in such articles show that deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension d ∈ N of the target functions under consideration. For example, we refer to Elbrächter et al. [15], Jentzen et al. [33], Gonon et al. [20,21], Grohs et al. [22,23,25], Kutyniok et al. [43], Reisinger & Zhang [49], Beneventano et al. [6], Berner et al. [7], Hornung et al. [31], Hutzenthaler et al. [32], and the overview articles Beck et al. [4] and E et al. [13] for such high-dimensional ANN approximation results in the numerical approximation of solutions of PDEs. We refer to Barron [1,2,3], Jones [34], Girosi & Anzellotti [19], Donahue et al. [12], Gurvits & Koiran [28], Kůrková et al. [39,40,41,42], Kainen et al. [35,36], Klusowski & Barron [38], Li et al. [45], and Cheridito et al. [9] for such high-dimensional ANN approximation results in the numerical approximation of certain specific target function classes independent of solutions of PDEs (cf., e.g., also Maiorov & Pinkus [46], Pinkus [48], Guliyev & Ismailov [26], Petersen & Voigtlaender [47], and Bölcskei et al. [8] for related results). In the proofs of several of the aforementioned high-dimensional approximation results it is crucial that the involved ANNs ar…