“…Branching diffusion approximation methods are, in the case of certain nonlinear PDEs, as efficient as plain vanilla Monte Carlo approximations are in the case of linear PDEs, but the error analysis applies only if the time horizon T ∈ (0, ∞) and the initial condition, respectively, are sufficiently small, and branching diffusion approximation methods fail to converge once the time horizon T ∈ (0, ∞) exceeds a certain threshold (cf., e.g., [41, Theorem 3.12]). For MLP approximation methods it has recently been shown in [4,45,46] that such algorithms do indeed overcome the curse of dimensionality for certain classes of gradient-independent PDEs. Numerical simulations for deep learning based approximation methods for nonlinear PDEs in high dimensions are very encouraging (see, e.g., the above-named references [1–3, 6–8, 13, 14, 16, 17, 21, 24, 25, 31, 34–36, 40, 43, 48, 50, 52, 54–58, 60, 62, 63]), but so far only a partial error analysis is available for such algorithms (which, in turn, is strongly based on the above-mentioned error analysis for the MLP approximation method; cf.…”
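To make the baseline in the comparison above concrete, the following is a minimal sketch of a plain vanilla Monte Carlo approximation for a linear PDE, here the d-dimensional heat equation via its Feynman–Kac representation u(T, x) = E[g(x + W_T)]. The function name, the choice of initial condition g, and the parameter values are illustrative assumptions and are not taken from the cited works; the point is only that the per-sample cost grows linearly in the dimension d while the statistical error decays like N^{-1/2} independently of d.

```python
import numpy as np

def monte_carlo_heat_solution(g, x, T, num_samples=10**5, seed=None):
    """Plain vanilla Monte Carlo estimate of u(T, x) for the heat equation
    u_t = (1/2) * Laplacian(u) with initial condition u(0, .) = g, using the
    Feynman-Kac representation u(T, x) = E[g(x + W_T)]."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample W_T ~ Normal(0, T * I_d) and average g over the samples.
    samples = rng.normal(loc=0.0, scale=np.sqrt(T), size=(num_samples, d))
    return np.mean(g(x + samples))

# Illustrative usage: g(y) = exp(-||y||^2) in d = 100 dimensions.
if __name__ == "__main__":
    d = 100
    g = lambda y: np.exp(-np.sum(y**2, axis=-1))
    print(monte_carlo_heat_solution(g, np.zeros(d), T=1.0, seed=0))
```

Branching diffusion and MLP methods aim to retain this dimension-robust behaviour for semilinear PDEs, with the restrictions on T and on the nonlinearity discussed in the passage above.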