In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results, whereas only a few works have advanced the statistical learning theory for GANs. In this work, we propose an infinite-dimensional theoretical framework for generative adversarial learning. Assuming the class of uniformly bounded, $k$-times $\alpha$-Hölder differentiable ($C^{k,\alpha}$) and uniformly positive densities, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of $C^{k,\alpha}$ generators. With a consistent definition of the hypothesis space of discriminators, we further show that in our framework the Jensen-Shannon (JS) divergence between the distribution induced by the generator obtained from the adversarial learning procedure and the data generating distribution converges to zero. As our convenient framework avoids modeling errors for both generators and discriminators, the error decomposition for adversarial learning implies that it suffices for the sampling error to vanish in the large-sample limit. To prove this, we endow the hypothesis spaces of generators and discriminators with $C^{k,\alpha'}$-topologies, $0 < \alpha' < \alpha$, which render these hypothesis spaces compact, so that the uniform law of large numbers can be applied. Under sufficiently strong regularity assumptions on the density of the data generating process, we also provide rates of convergence based on concentration and chaining. To this end, we first prove subgaussian properties of the empirical process indexed by generators and discriminators. Furthermore, as covering numbers of bounded sets in $C^{k,\alpha}$ Hölder spaces with respect to the $L^\infty$-norm lead to a convergent metric entropy integral for sufficiently large $k$, we obtain a finite constant in Dudley's inequality. This, in combination with McDiarmid's inequality, yields explicit rate estimates for the convergence of the GAN learner to the true probability distribution in JS divergence.
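As a notational sketch (not taken verbatim from the paper), the following display records the two central quantities the abstract refers to: the JS divergence and the error decomposition for adversarial learning. The symbols $\mu^*$ (data generating distribution), $\mu_g$ (distribution induced by a generator $g$ from the generator hypothesis space $\mathcal{G}$), $\hat g_n$ (the GAN learner from $n$ samples), and the sampling-error term $\Delta_n$ are illustrative placeholders rather than the paper's fixed notation.

% Jensen-Shannon divergence between the data generating distribution \mu^* and
% the distribution \mu_g induced by a generator g, using the mixture (\mu^*+\mu_g)/2:
\[
  \mathrm{JS}\bigl(\mu^*, \mu_g\bigr)
  \;=\; \tfrac{1}{2}\,\mathrm{KL}\!\Bigl(\mu^* \,\Big\|\, \tfrac{\mu^* + \mu_g}{2}\Bigr)
  \;+\; \tfrac{1}{2}\,\mathrm{KL}\!\Bigl(\mu_g \,\Big\|\, \tfrac{\mu^* + \mu_g}{2}\Bigr).
\]
% Schematic decomposition behind "it suffices that the sampling error vanishes":
% with the Rosenblatt generator realizable in \mathcal{G} and a matched discriminator
% class \mathcal{D}, the approximation (modeling) term is zero and only a
% sampling-error term \Delta_n remains (placeholder notation, assumed here).
\[
  \mathrm{JS}\bigl(\mu^*, \mu_{\hat g_n}\bigr)
  \;\le\;
  \underbrace{\inf_{g \in \mathcal{G}} \mathrm{JS}\bigl(\mu^*, \mu_g\bigr)}_{=\,0\ \text{(no modeling error)}}
  \;+\;
  \underbrace{\Delta_n(\mathcal{G}, \mathcal{D})}_{\text{sampling error}\;\to\;0}.
\]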