2011
DOI: 10.1080/02664763.2011.573542
Cross-validating fit and predictive accuracy of nonlinear quantile regressions

Abstract: The paper proposes a cross-validation method to address the question of specification search in a multiple nonlinear quantile regression framework. Linear parametric, spline-based partially linear and kernel-based fully nonparametric specifications are contrasted as competitors using cross-validated weighted L1-norm based goodness-of-fit and prediction error criteria. The aim is to provide a fair comparison with respect to estimation accuracy and/or predictive ability for different semi- and nonparametric spec…
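To make the criterion concrete, here is a minimal sketch of the cross-validated check-loss (weighted L1) comparison the abstract describes, contrasting a linear and a spline-based quantile specification. The synthetic data, fold count, and the scikit-learn estimators (QuantileRegressor, SplineTransformer) are illustrative assumptions standing in for the paper's specifications, not the authors' implementation.

```python
# Sketch: compare competing quantile-regression specifications by a
# cross-validated check-loss (weighted L1) criterion, as in the abstract.
import numpy as np
from sklearn.linear_model import QuantileRegressor
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

def check_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: the weighted L1 criterion at quantile tau."""
    u = y_true - y_pred
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

# Illustrative nonlinear data (assumption, not the paper's application).
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(300)

tau = 0.5
specs = {
    "linear": QuantileRegressor(quantile=tau, alpha=0.0),
    "spline": make_pipeline(SplineTransformer(n_knots=8, degree=3),
                            QuantileRegressor(quantile=tau, alpha=0.0)),
}

# Cross-validated prediction error for each competing specification.
for name, model in specs.items():
    losses = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model.fit(X[train], y[train])
        losses.append(check_loss(y[test], model.predict(X[test]), tau))
    print(f"{name}: CV check loss = {np.mean(losses):.4f}")
```

On this nonlinear toy data the spline specification should achieve the lower cross-validated check loss, which is exactly the kind of fair out-of-sample comparison the criterion is meant to deliver.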

Cited by 6 publications (2 citation statements) · References 27 publications (30 reference statements)
“…To evaluate the performance of the chosen linear and nonlinear QRPC models (once the orders l and k are selected), following the suggestion of Haupt et al. (2011), we provide a model comparison procedure by examining the estimation accuracy and prediction ability of different models.…”
Section: Model Comparison
confidence: 99%
“…RMSE is most useful when large errors are particularly undesirable, since errors are squared, giving relatively higher weight to larger errors, before being averaged. Additionally, we employ the weighted mean absolute error (WMAE) [184] to evaluate prediction performance when under-prediction is considered more costly than over-prediction.…”
Section: Performance Evaluation Of Prediction Models
confidence: 99%
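As a hedged illustration of the contrast drawn in that passage, the sketch below computes RMSE alongside an asymmetrically weighted MAE. The 2:1 penalty on under-prediction is an assumed example weighting, not the exact definition used in reference [184].

```python
import numpy as np

def rmse(y_true, y_pred):
    # Squaring makes large errors dominate the average.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def wmae(y_true, y_pred, under_weight=2.0, over_weight=1.0):
    # err > 0 means the model under-predicted; weight those errors more
    # heavily (the 2:1 ratio here is an illustrative assumption).
    err = y_true - y_pred
    w = np.where(err > 0, under_weight, over_weight)
    return np.mean(w * np.abs(err))

y_true = np.array([10.0, 12.0, 9.0, 15.0])
y_pred = np.array([ 9.0, 13.0, 9.5, 12.0])
print(rmse(y_true, y_pred))  # the large miss on the last point dominates
print(wmae(y_true, y_pred))  # under-predictions (points 1 and 4) cost double
```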