We derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes in $L^2(\mathbb{R}^d)$. In other words, we establish a connection between the complexity of a function class and the complexity of deep neural networks approximating functions from this class to within a prescribed accuracy. Additionally, we prove that our lower bounds are achievable for a broad family of function classes. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, ridgelets, curvelets, shearlets, $\alpha$-shearlets, and, more generally, $\alpha$-molecules. Our central result elucidates a remarkable universality property of neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. As a specific example, we consider the class of $\alpha^{-1}$-cartoon-like functions, which is approximated optimally by $\alpha$-shearlets. We also explain how our results can be extended to the case of functions on low-dimensional immersed manifolds. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks providing close-to-optimal approximation rates. Moreover, these results indicate that stochastic gradient descent can actually learn approximations that are sparse in the representation systems optimally sparsifying the function class the network is trained on.

Throughout the paper, we consider the case $\Phi : \mathbb{R}^d \to \mathbb{R}$, i.e., $N_L = 1$, which includes situations such as the classification and temperature-prediction problems described above. We emphasize, however, that the general results of Sections 3, 4, and 5 are readily generalized to $N_L > 1$. We denote the class of networks $\Phi : \mathbb{R}^d \to \mathbb{R}$ with exactly $L$ layers, connectivity no more than $M$, and activation function $\rho$ by $\mathcal{NN}_{L,M,d,\rho}$, with the understanding that for $L = 1$ the set $\mathcal{NN}_{L,M,d,\rho}$ is empty. Moreover, we let
\[
  \mathcal{NN}_{\infty,M,d,\rho} := \bigcup_{L \in \mathbb{N}} \mathcal{NN}_{L,M,d,\rho}, \qquad
  \mathcal{NN}_{L,\infty,d,\rho} := \bigcup_{M \in \mathbb{N}} \mathcal{NN}_{L,M,d,\rho}, \qquad
  \mathcal{NN}_{\infty,\infty,d,\rho} := \bigcup_{L \in \mathbb{N}} \mathcal{NN}_{L,\infty,d,\rho}.
\]
Now, given a function $f : \mathbb{R}^d \to \mathbb{R}$, we are interested in the theoretically best possible approximation of $f$ by a network $\Phi \in \mathcal{NN}_{\infty,M,d,\rho}$. Specifically, we will want to know how the approximation quality depends on the connectivity $M$ and on the associated number of bits needed to store the network topology and the quantized weights.

The function $g = \sum_{i=1}^{7} c_i f(\cdot - d_i)$ is compactly supported, has 7 vanishing moments in the $x_1$-direction, and satisfies $\hat{g}(\xi) \neq 0$ for all $\xi \in [-3,3]^2$ such that $\xi_1 \neq 0$. Then, by Theorem 6.4 and Remark 6.7, there exists $\delta > 0$ such that $\mathrm{SH}_\alpha(f, g, \delta; \Omega)$ is optimal for $\mathcal{E}^{1/\alpha}(\Omega; \nu)$. We define the associated dictionary, ordering $(A_j)_{j \in \mathbb{N}}$ such that $|\det(A_j)| \leq |\det(A_{j+1})|$ for all $j \in \mathbb{N}$. This construction implies that the $\alpha$-shearlet system $\mathrm{SH}_\alpha(f, g, \delta; \Omega)$ is an affine system.
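To make the approximation question just posed concrete, one natural formalization (illustrative notation of our own, not necessarily the paper's) is the best $L^2$-error achievable with connectivity at most $M$:

```latex
% Illustrative definition (notation assumed, not taken verbatim from the
% paper): best L^2-approximation error with connectivity at most M.
\[
  \Gamma_M(f) \;:=\; \inf_{\Phi \in \mathcal{NN}_{\infty,M,d,\rho}}
      \lVert f - \Phi \rVert_{L^2(\mathbb{R}^d)} .
\]
% The question is then how fast \Gamma_M(f) decays, uniformly over a given
% function class, as the connectivity M (and the bit budget for storing
% the topology and the quantized weights) grows.
```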
In recent years, directional multiscale transformations such as the curvelet or shearlet transformation have gained considerable attention. The reason is that these transforms are, unlike more traditional transforms such as wavelets, able to efficiently handle data with features along edges. The main result in [27] confirming this property for shearlets is due to Kutyniok and Labate, who show that, for very special functions $\psi$ with frequency support in a compact conical wedge, the decay rate of the shearlet coefficients of a tempered distribution $f$ with respect to the shearlet $\psi$ can resolve the wavefront set of $f$. We demonstrate that the same result holds under much weaker assumptions on $\psi$, namely that it possess sufficiently many anisotropic vanishing moments. We also show how to build frames for $L^2(\mathbb{R}^2)$ from any such function. To prove our statements, we develop a new approach based on an adaptation of the Radon transform to the shearlet structure.
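For orientation, the continuous shearlet transform underlying this discussion can be written in its standard parabolic-scaling form (a standard normalization; the paper's conventions may differ slightly):

```latex
% Standard form of the continuous shearlet transform; the wavefront set
% of f is read off from the decay of the coefficients as the scale a -> 0.
\[
  \mathcal{SH}_\psi f(a,s,t) = \langle f, \psi_{a,s,t} \rangle,
  \qquad
  \psi_{a,s,t}(x) = a^{-3/4}\, \psi\bigl(A_a^{-1} S_s^{-1}(x - t)\bigr),
\]
\[
  A_a = \begin{pmatrix} a & 0 \\ 0 & a^{1/2} \end{pmatrix},
  \qquad
  S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},
  \qquad a > 0,\; s \in \mathbb{R},\; t \in \mathbb{R}^2 .
\]
```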
The problem of phase retrieval
This paper develops fundamental limits of deep neural network learning by characterizing what is possible when no constraints are imposed on the learning algorithm or on the amount of training data. Concretely, we consider Kolmogorov-optimal approximation through deep neural networks, with the guiding theme being a relation between the complexity of the function (class) to be approximated and the complexity of the approximating network in terms of connectivity and memory requirements for storing the network topology and the associated quantized weights. The theory we develop establishes that deep networks are Kolmogorov-optimal approximants for markedly different function classes, such as unit balls in Besov spaces and modulation spaces. In addition, deep networks provide exponential approximation accuracy (i.e., the approximation error decays exponentially in the number of nonzero weights in the network) for the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions. Moreover, this holds true even for one-dimensional oscillatory textures and the Weierstrass function, a fractal function, for neither of which any previously known method achieves exponential approximation accuracy. We also show that, in the approximation of sufficiently smooth functions, finite-width deep networks require strictly smaller connectivity than finite-depth wide networks.
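The exponential accuracy claimed for the multiplication operation can be illustrated with the well-known sawtooth construction for approximating $x^2$ by deep ReLU networks (a minimal sketch assuming a Yarotsky-type construction; function names and parameters here are our own, not the paper's):

```python
import numpy as np

# Minimal sketch (Yarotsky-style construction, assumed for illustration):
# deep ReLU networks approximate x**2 on [0, 1] with error decaying
# exponentially in the depth, via composed "sawtooth" maps.

def hat(x):
    # ReLU-expressible tent map on [0, 1]:
    # 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
    return 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0) + 2 * np.maximum(x - 1, 0)

def square_approx(x, depth):
    # f_m(x) = x - sum_{k=1}^m g_k(x) / 4**k, where g_k is the k-fold
    # composition of the tent map; the sup-error is 2**(-2*(depth + 1)).
    g, out = x, np.copy(x)
    for k in range(1, depth + 1):
        g = hat(g)                       # sawtooth with 2**k teeth
        out = out - g / 4.0 ** k
    return out

xs = np.linspace(0.0, 1.0, 10001)
for m in (2, 4, 6, 8):
    err = np.max(np.abs(square_approx(xs, m) - xs ** 2))
    print(f"depth {m}: sup-error {err:.2e} (bound {2.0 ** (-2 * (m + 1)):.2e})")

# Multiplication then follows at constant extra cost from the polarization
# identity x*y = ((x + y)**2 - x**2 - y**2) / 2.
```

The error halves twice per additional layer while the number of nonzero weights grows only linearly in the depth, which is exactly the exponential accuracy-versus-connectivity trade-off described above.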