We consider non-parametric estimation problems in the presence of dependent data, notably non-parametric regression with random design and non-parametric density estimation. The proposed estimation procedure is based on dimension reduction. The minimax optimal rate of convergence of the estimator is derived assuming sufficiently weak dependence, characterized by fast decreasing mixing coefficients. We illustrate these results under classical smoothness assumptions. However, the proposed estimator requires an optimal choice of a dimension parameter that depends on certain characteristics of the function of interest, which are not known in practice. The main issue addressed in our work is an adaptive choice of this dimension parameter combining model selection and Lepski's method, inspired by the recent work of Goldenshluger and Lepski [2011]. We show that this data-driven estimator attains the lower risk bound up to a constant, provided the mixing coefficients decay sufficiently fast.
We consider the estimation of a structural function which models a nonparametric relationship between a response and an endogenous regressor given an instrument, in the presence of dependence in the data generating process. Assuming an independent and identically distributed (iid) sample, it has been shown in Johannes and Schwarz [2011] that a least squares estimator based on dimension reduction and thresholding can attain minimax-optimal rates of convergence up to a constant. As this estimation procedure requires an optimal choice of a dimension parameter depending, amongst others, on certain characteristics of the unknown structural function, we investigate its fully data-driven choice based on a combination of model selection and Lepski's method, inspired by Goldenshluger and Lepski [2011]. For the resulting fully data-driven thresholded least squares estimator, a non-asymptotic oracle risk bound is derived either for an iid sample or after dismissing the independence assumption. In both cases the derived risk bounds coincide up to a constant, assuming sufficiently weak dependence characterised by a fast decay of the mixing coefficients. Employing the risk bounds, the minimax optimality of the estimator up to a constant is established over a variety of classes of structural functions.
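To illustrate the kind of data-driven dimension choice discussed above, the following is a minimal sketch in the spirit of the Goldenshluger–Lepski method, applied here to a simple projection (series) density estimator on [0, 1] with a cosine basis. The basis, the penalty constant `kappa`, and the bias-proxy construction are illustrative assumptions, not the paper's exact procedure: the selected dimension minimises a bias proxy A(m) plus a variance-type penalty pen(m).

```python
import numpy as np

def basis(x, m):
    """First m cosine basis functions on [0, 1], evaluated at points x.

    Returns an array of shape (len(x), m). This orthonormal basis is an
    illustrative choice, not prescribed by the abstract above.
    """
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x)]
    for j in range(1, m):
        cols.append(np.sqrt(2.0) * np.cos(np.pi * j * x))
    return np.stack(cols, axis=1)

def select_dimension(sample, m_max, kappa=2.0):
    """Goldenshluger-Lepski-style choice of the projection dimension.

    A(m) compares the projection estimator of dimension m with all larger
    nested projections (a bias proxy); pen(m) = kappa * m / n mimics the
    variance term. The constant kappa is an assumption for illustration.
    """
    n = len(sample)
    # Empirical Fourier coefficients up to the largest candidate dimension.
    coef = basis(sample, m_max).mean(axis=0)
    pen = kappa * np.arange(1, m_max + 1) / n
    A = np.empty(m_max)
    for m in range(1, m_max + 1):
        # Worst positive excess of the contrast between nested projections.
        tails = [max(np.sum(coef[m:mp] ** 2) - pen[mp - 1], 0.0)
                 for mp in range(m, m_max + 1)]
        A[m - 1] = max(tails)
    m_hat = int(np.argmin(A + pen)) + 1
    return m_hat, coef[:m_hat]

# Usage: select a dimension for a smooth density, here Beta(2, 2).
rng = np.random.default_rng(0)
sample = rng.beta(2.0, 2.0, size=2000)
m_hat, coef = select_dimension(sample, m_max=30)
```

For a smooth density such as Beta(2, 2), the bias proxy decays quickly in m, so the selected dimension is small relative to `m_max`; the trade-off is governed entirely by data-driven quantities, which is the point of the fully adaptive procedure described above.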