Cross-validation is the standard approach for tuning parameter selection in many non-parametric regression problems. However, its use is less common in change-point regression, perhaps because its prediction-error-based criterion may appear to permit small spurious changes and hence be less well suited to estimating the number and locations of change-points. We show that, in fact, the problems with cross-validation using squared error loss are more severe: it can lead to systematic under- or over-estimation of the number of change-points, and to highly suboptimal estimation of the mean function, even in simple settings where changes are easily detectable. We propose two simple remedies, the first using absolute error rather than squared error loss, and the second modifying the holdout sets used. For the latter, we provide conditions that permit consistent estimation of the number of change-points for a general change-point estimation procedure. We show these conditions are satisfied for optimal partitioning using new results on its performance when supplied with an incorrect number of change-points. Numerical experiments show that the absolute error approach in particular is competitive with common change-point methods using classical tuning parameter choices when error distributions are well specified, but can substantially outperform them in misspecified models. An implementation of our methodology is available in the R package crossvalidationCP on CRAN.
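To make the selection criterion concrete, the following is a minimal, self-contained R sketch of 2-fold cross-validation on odd- and even-indexed observations for choosing the number of segments, scoring the held-out half with both squared and absolute error loss. It illustrates the criterion only and is not the crossvalidationCP implementation; all function and variable names here are ours.

```r
# Least-squares fit with exactly k segments via dynamic programming
# (optimal partitioning).
fit_segments <- function(y, k) {
  n <- length(y)
  cs <- c(0, cumsum(y)); cs2 <- c(0, cumsum(y^2))
  segcost <- function(i, j) {           # SSE of y[i..j] around its mean
    s <- cs[j + 1] - cs[i]
    (cs2[j + 1] - cs2[i]) - s^2 / (j - i + 1)
  }
  C <- matrix(Inf, k, n); B <- matrix(0L, k, n)
  for (j in 1:n) C[1, j] <- segcost(1, j)
  if (k > 1) for (m in 2:k) for (j in m:n) for (t in (m - 1):(j - 1)) {
    v <- C[m - 1, t] + segcost(t + 1, j)
    if (v < C[m, j]) { C[m, j] <- v; B[m, j] <- t }
  }
  ends <- integer(k); ends[k] <- n
  if (k > 1) for (m in k:2) ends[m - 1] <- B[m, ends[m]]
  starts <- c(1, head(ends, -1) + 1)
  list(starts = starts,
       means = mapply(function(a, b) mean(y[a:b]), starts, ends))
}

# 2-fold CV on odd/even indices: train on one half, predict the held-out
# half by the fitted step function, score with both losses, and average
# over the two folds. Assumes kmax <= length(y) / 2.
cv_changepoints <- function(y, kmax = 10) {
  n <- length(y)
  odd <- seq(1, n, by = 2); even <- seq(2, n, by = 2)
  score <- function(tr, te, k) {
    fit <- fit_segments(y[tr], k)
    bnd <- c(1, tr[fit$starts[-1]])     # segment starts on the full grid
    r <- y[te] - fit$means[findInterval(te, bnd)]
    c(se = mean(r^2), ae = mean(abs(r)))
  }
  loss <- sapply(1:kmax, function(k)
    (score(odd, even, k) + score(even, odd, k)) / 2)
  list(k_squared = which.min(loss["se", ]),   # CV with squared error loss
       k_absolute = which.min(loss["ae", ]))  # CV with absolute error loss
}
```

On a series y, cv_changepoints(y) returns the number of segments selected under each loss; the two selections can differ markedly, which is the phenomenon the comparison above concerns.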
We assume a nonparametric regression model whose signal is the sum of a piecewise constant function and a smooth function. To detect the change-points and estimate the regression function, we propose PCpluS, a combination of the fused lasso and kernel smoothing. In contrast to existing approaches, it explicitly uses the assumption that the signal decomposes into a piecewise constant and a smooth component when detecting change-points. This is motivated by several applications and by theoretical results on partial linear models. Tuning parameters are selected by cross-validation. We argue that in this setting minimizing the L1-loss is superior to minimizing the L2-loss. We also highlight important consequences for cross-validation in piecewise constant change-point regression. Simulations demonstrate that our approach attains a small average mean squared error and detects change-points well, and we apply the methodology to genome sequencing data to detect copy number variations. Finally, we demonstrate its flexibility by combining it with smoothing splines and by proposing extensions to multivariate and filtered data.
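As an illustration of the piecewise-constant-plus-smooth decomposition, here is one plausible backfitting-style sketch in R. It is not the PCpluS algorithm itself; it assumes the genlasso package for the one-dimensional fused lasso, and the tuning parameters lambda and bandwidth are placeholders that would in practice be chosen by cross-validation with the L1-loss, as argued above.

```r
# Backfitting-style sketch of a piecewise-constant-plus-smooth fit;
# NOT the authors' PCpluS algorithm. Assumes the 'genlasso' package.
library(genlasso)

pcplus_sketch <- function(y, lambda, bandwidth, n_iter = 10) {
  n <- length(y); x <- seq_len(n)
  g <- numeric(n)                       # smooth component, initialized at 0
  f <- numeric(n)                       # piecewise constant component
  for (it in seq_len(n_iter)) {
    # Fused lasso on the de-smoothed data gives the piecewise constant part;
    # lambda should lie within the range of the computed solution path.
    path <- fusedlasso1d(y - g)
    f <- as.numeric(coef(path, lambda = lambda)$beta)
    # Kernel smoothing of the remainder gives the smooth part.
    g <- ksmooth(x, y - f, kernel = "normal",
                 bandwidth = bandwidth, x.points = x)$y
  }
  list(piecewise = f, smooth = g,
       changepoints = which(abs(diff(f)) > 1e-8))  # jump locations of f
}
```

Alternating the two fits reflects the motivation for the decomposition: the fused lasso captures jumps that kernel smoothing would blur, while the kernel smoother absorbs slow variation that would otherwise force spurious jumps into the fused lasso fit.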