Reproducing kernel Hilbert spaces are an important family of function spaces and play useful roles in various branches of analysis and applications, including kernel-based machine learning. When the domain of definition is compact, they can be characterized, by means of the Mercer theorem, as the image of the square root of an integral operator. The purpose of this paper is to extend the Mercer theorem to noncompact domains and to establish a functional-analytic characterization of reproducing kernel Hilbert spaces on general domains.
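The compact-domain characterization above can be illustrated numerically: discretizing the integral operator of a continuous kernel on a compact interval yields a finite Mercer-type eigenexpansion that reconstructs the kernel. The Gaussian kernel, grid size, and bandwidth below are illustrative choices, not part of the paper's setting.

```python
import numpy as np

# Discretize the integral operator (L_K f)(x) = \int_0^1 K(x, y) f(y) dy
# on a uniform grid over the compact interval [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
w = 1.0 / n  # quadrature weight of the uniform grid

def gaussian_kernel(s, t, sigma=0.2):
    # Illustrative continuous, symmetric, positive-definite kernel.
    return np.exp(-(s - t) ** 2 / (2 * sigma ** 2))

K = gaussian_kernel(x[:, None], x[None, :])

# Discretized integral operator; symmetric, so eigh applies.
L = K * w
eigvals, eigvecs = np.linalg.eigh(L)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order

# Mercer expansion: K(s, t) = sum_i lambda_i phi_i(s) phi_i(t),
# with phi_i the eigenfunctions normalized in the discretized L^2 norm.
phi = eigvecs / np.sqrt(w)
K_approx = (phi * eigvals) @ phi.T
print(np.max(np.abs(K - K_approx)))  # reconstruction error near machine precision
```

The rapid decay of `eigvals` reflects the smoothness of the Gaussian kernel; the RKHS consists of functions whose expansion coefficients decay fast enough relative to these eigenvalues, i.e. the image of the square root of `L`.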
In this paper we study the learning performance of regularized least squares regression with α-mixing and φ-mixing inputs. Capacity-independent error bounds and learning rates are derived by means of an integral operator technique. Even for independent samples, our learning rates improve on those in the literature. The results are sharp in the sense that, when the mixing conditions are strong enough, the rates are shown to be close to, or the same as, those for learning with independent samples. They also reveal interesting phenomena of learning with dependent samples: (i) dependent samples contain less information and lead to worse error bounds than independent samples; (ii) the influence of the dependence between samples on the learning process decreases as the smoothness of the target function increases.
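The regularized least squares scheme studied above can be sketched in its kernel form: the estimator minimizes empirical squared error plus a squared RKHS-norm penalty, which reduces to a linear system in the kernel Gram matrix. The Gaussian kernel, target function, and regularization parameter below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100  # sample size
X = rng.uniform(0.0, 1.0, m)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(m)  # noisy target

def gaussian_kernel(s, t, sigma=0.1):
    # Illustrative kernel inducing the RKHS used for regression.
    return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))

# Regularized least squares: minimize (1/m) sum (f(X_i) - y_i)^2 + lam ||f||_K^2.
# By the representer theorem, f(x) = sum_j c_j K(x, X_j) with
# (K + m * lam * I) c = y.
lam = 1e-3
K = gaussian_kernel(X, X)
c = np.linalg.solve(K + m * lam * np.eye(m), y)

x_test = np.linspace(0.0, 1.0, 50)
f_test = gaussian_kernel(x_test, X) @ c
err = np.mean((f_test - np.sin(2 * np.pi * x_test)) ** 2)
print(err)  # small mean squared error against the noiseless target
```

With mixing (dependent) inputs one would draw `X` from a dependent process instead of i.i.d. uniforms; the abstract's point is that the same estimator then satisfies weaker error bounds, with the gap shrinking as the mixing strengthens or the target gets smoother.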