In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results, whereas only a few works develop statistical learning theory for GANs. In this work, we propose an infinite-dimensional theoretical framework for generative adversarial learning. Assuming the class of uniformly bounded, k-times α-Hölder differentiable (C^{k,α}) and uniformly positive densities, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of C^{k,α} generators. With a consistent definition of the hypothesis space of discriminators, we further show that in our framework the Jensen-Shannon (JS) divergence between the distribution induced by the generator obtained from the adversarial learning procedure and the data generating distribution converges to zero. As our framework avoids modeling errors for both generators and discriminators, the error decomposition for adversarial learning shows that it suffices for the sampling error to vanish in the large-sample limit. To prove this, we endow the hypothesis spaces of generators and discriminators with C^{k,α′}-topologies, 0 < α′ < α, which turn the hypothesis spaces into compact topological spaces so that the uniform law of large numbers can be applied. Under sufficiently strict regularity assumptions on the density of the data generating process, we also provide rates of convergence based on concentration and chaining. To this end, we first prove subgaussian properties of the empirical process indexed by generators and discriminators. Furthermore, as covering numbers of bounded sets in C^{k,α} Hölder spaces with respect to the L^∞-norm lead to a convergent metric entropy integral if k is sufficiently large, we obtain a finite constant in Dudley's inequality. This, in combination with McDiarmid's inequality, yields explicit rate estimates for the convergence of the GAN learner to the true probability distribution in JS divergence.
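For orientation (standard definitions, not statements specific to this paper), the two objects referred to above can be written as follows. For a random vector X = (X₁, …, X_d) with a positive continuous density, the Rosenblatt transformation is built from successive conditional distribution functions and maps X to a uniform variable on [0,1]^d, so its inverse acts as a generator pushing uniform noise to the target law; the JS divergence is the symmetrized Kullback-Leibler divergence to the mixture:

\[
T(x) = \bigl(F_1(x_1),\, F_{2\mid 1}(x_2\mid x_1),\, \dots,\, F_{d\mid 1,\dots,d-1}(x_d\mid x_1,\dots,x_{d-1})\bigr),
\qquad T(X) \sim \mathrm{Unif}\bigl([0,1]^d\bigr),
\]
\[
\mathrm{JS}(P,Q) \;=\; \tfrac12\,\mathrm{KL}\Bigl(P \,\Big\|\, \tfrac{P+Q}{2}\Bigr) \;+\; \tfrac12\,\mathrm{KL}\Bigl(Q \,\Big\|\, \tfrac{P+Q}{2}\Bigr).
\]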
We consider one-dimensional random Schrödinger operators with a background potential, arising in the inverse problem of scattering. We study the influence of the background potential on the essential spectrum of the random Schrödinger operator and obtain Anderson localization for a larger class of one-dimensional Schrödinger operators. Further, we prove the existence of the integrated density of states and give a formula for it. Here f is a real-valued function and q_k (k ∈ ℤ) are independent random variables with a common distribution.
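As general background (a standard definition, not the specific formula obtained in the paper), the integrated density of states of a one-dimensional Schrödinger operator H is commonly defined by restricting H to boxes Λ_L = [−L, L] with suitable (e.g. Dirichlet) boundary conditions and counting eigenvalues per unit length:

\[
N(E) \;=\; \lim_{L\to\infty} \frac{1}{|\Lambda_L|}\,\#\Bigl\{\text{eigenvalues of } H\big|_{\Lambda_L} \text{ not exceeding } E\Bigr\};
\]

for ergodic random operators this limit exists almost surely and is non-random.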
Recently, a modified Drude model of the valence electron gas in metals was investigated mathematically. The implications of this model suggest, for example, that the thermal noise voltage increases with the length of the metallic conductor. This observation prompted us to carry out some novel experiments whose outcomes qualitatively confirmed the Drude model. We discuss some implications of this model in the light of the performed measurements.
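For context (textbook background, not the modified model's prediction), already the classical Johnson-Nyquist relation ties the thermal noise voltage of a conductor to its resistance and hence to its length: for a conductor of resistivity ρ, cross-section A and length ℓ, measured over a bandwidth Δf,

\[
\langle V^2 \rangle \;=\; 4\,k_B T\, R\,\Delta f, \qquad R = \frac{\rho\,\ell}{A},
\]

so the rms noise voltage grows like \(\sqrt{\ell}\) already in the classical picture.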
We consider the scattering of spherically symmetric acoustic waves by an anisotropic medium and a cavity. While there is a large number of recent works devoted to scattering problems with cavities, the existence of an infinite set of transmission eigenvalues is an open problem in general. In this paper we prove the existence of an infinite set of transmission eigenvalues for anisotropic Helmholtz and Schrödinger equations in a spherically symmetric domain with a cavity. Further, we consider the corresponding inverse problem and, under some assumptions, prove uniqueness for it. We consider the transmission problem (1.1)–(1.5) (the displayed equations are not reproduced here). Here B₁ = B(0, 1) is the open ball with center at the origin of ℝ³ and radius 1, B₀ = B(0, r₀) (0 < r₀ < 1) is a concentric ball with a smaller radius, (w, v) is the pair of fields, ∇ denotes the gradient operator, Δ denotes the Laplacian, x · y is the formal inner product in ℝ³, ν represents the outward unit normal to ∂B₁, λ ⩾ 0 is the spectral parameter, n : B₁ \ B₀ → ℂ is a function corresponding to the refractive index of the medium at location x, and a : B₁ \ B₀ → ℝ is a given function corresponding to the anisotropy of the medium. At the beginning we will assume that n ∈ L^∞(B₁ \ B₀), a ∈ C¹(B₁ \ B₀) and, as in (1.6), Im n ⩾ 0 a.e. on B₁ \ B₀ and a(x) ⩾ a₀ (x ∈ B₁ \ B₀) with some positive constant a₀. The problem (1.1)–(1.5) was first investigated in [3]. Let us recall some definitions from there. Define H¹_Δ(B₁ \ B₀) = {u ∈ H¹(B₁ \ B₀) : Δu ∈ L²(B₁ \ B₀)}. Definition 1.1. A weak solution to (1.1)–(1.5) is a pair of functions (w, v) ∈ H¹(B₁ \ B̄₀) × H¹(B₁) satisfying (1.1), (1.2) in the distributional sense such that w = 0 on ∂B₀, w − v ∈ H¹_Δ(B₁ \ B₀) and w − v = 0, ν · ∇(w − v) = 0 on ∂B₁.
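For orientation, the following is a sketch of how anisotropic transmission eigenvalue problems with a cavity are typically written in the literature; it uses the notation introduced above and is consistent with the weak-solution definition, but it is not necessarily the exact form of (1.1)–(1.5):

\[
\nabla \cdot \bigl(a \nabla w\bigr) + \lambda\, n\, w = 0 \ \text{in } B_1 \setminus \overline{B_0}, \qquad
\Delta v + \lambda\, v = 0 \ \text{in } B_1,
\]
\[
w = 0 \ \text{on } \partial B_0, \qquad
w - v = 0, \quad \nu \cdot \nabla (w - v) = 0 \ \text{on } \partial B_1,
\]

and λ is called a transmission eigenvalue if this system admits a nontrivial pair (w, v).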