2019
DOI: 10.1080/01621459.2018.1537917

On Degrees of Freedom of Projection Estimators With Applications to Multivariate Nonparametric Regression

Abstract: In this paper, we consider the nonparametric regression problem with multivariate predictors. We provide a characterization of the degrees of freedom and divergence for estimators of the unknown regression function, which are obtained as outputs of linearly constrained quadratic optimization procedures; namely, minimizers of the least squares criterion with linear constraints and/or quadratic penalties. As special cases of our results, we derive explicit expressions for the degrees of freedom in many nonparame…
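The abstract ties the degrees of freedom of a constrained least-squares fit to its divergence. A minimal Monte Carlo illustration of that connection uses isotonic regression, a linearly constrained least-squares problem; the setup, the pool-adjacent-violators implementation, and all constants below are illustrative assumptions, not taken from the paper. The covariance form of degrees of freedom, df = Σ_i Cov(θ̂_i, y_i)/σ², should agree on average with the divergence of the isotonic LSE, which is known to equal the number of constant pieces of the fit:

```python
import random

random.seed(1)

def pava(y):
    """Pool-adjacent-violators algorithm: nondecreasing least-squares fit."""
    blocks = []                      # each block stores [sum, count]
    for v in y:
        blocks.append([v, 1])
        # merge while the previous block's mean exceeds the current one's
        while len(blocks) > 1 and \
              blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

n, sigma, reps = 30, 1.0, 1000
mu = [2.0 * i / n for i in range(n)]          # monotone true mean (illustrative)
ys, fits, pieces = [], [], []
for _ in range(reps):
    y = [m + random.gauss(0.0, sigma) for m in mu]
    f = pava(y)
    ys.append(y)
    fits.append(f)
    # divergence of the isotonic LSE = number of constant pieces of the fit
    pieces.append(1 + sum(f[i] != f[i - 1] for i in range(1, n)))

# covariance form of degrees of freedom: sum_i Cov(theta_hat_i, y_i) / sigma^2
df_cov = 0.0
for i in range(n):
    my = sum(r[i] for r in ys) / reps
    mf = sum(r[i] for r in fits) / reps
    df_cov += sum((ys[r][i] - my) * (fits[r][i] - mf)
                  for r in range(reps)) / (reps - 1)
df_cov /= sigma ** 2

print(df_cov, sum(pieces) / reps)   # the two df estimates should be close
```

The agreement of the two printed numbers is a numerical check of Stein's identity in this special case, not a substitute for the paper's general characterization.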

Cited by 21 publications (12 citation statements)
References 47 publications
“…, n onto the space of θ; see [34]. The above characterization can be seen as a consequence of the subgradient inequality for convex functions; see [119, Theorem 25.1, p.…”
Section: Computation of the LSE (mentioning)
confidence: 99%
“…Even in one-dimensional convex regression, as far as we are aware, whether f̂_n(0+) is an O_p(1) random variable is not known; see [52] for some results on f̂_n(0+) and its derivative (also see [8]). This has motivated the study of bounded/penalized shape-restricted LSEs; see, e.g., [34, 143, 84].…”
Section: Some Open Problems (mentioning)
confidence: 99%
“…, y_{n−1}, y_n − µ). Similar regularization methods for isotonic regression have been studied by Chen et al (2015), Wu et al (2015), and Luss and Rosset (2017).…”
Section: Dealing With Inconsistency at Boundaries (mentioning)
confidence: 99%
“…When θ̂_s is the linear regression estimator onto a set of predictor variables indexed by the parameter s, the rule in (11) encompasses model selection via C_p minimization, which is a classical topic in statistics. In general, tuning parameter selection via SURE minimization has been widely advocated by authors across various problem settings, e.g., Donoho and Johnstone (1995); Johnstone (1999); Zou et al (2007); Zou and Yuan (2008); Tibshirani and Taylor (2011, 2012); Candes et al (2013); Ulfarsson and Solo (2013a,b); Chen et al (2015), just to name a few.…”
Section: Parameter Tuning via SURE (mentioning)
confidence: 99%
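The last excerpt describes choosing a tuning parameter by minimizing SURE, which needs exactly the degrees-of-freedom quantity the paper characterizes: for a linear smoother θ̂_λ = S_λ y, df(λ) = tr(S_λ) and SURE(λ) = ||y − θ̂_λ||² − nσ² + 2σ² tr(S_λ). A self-contained sketch with a toy quadratic-penalty family θ̂_λ = y/(1 + λ), the minimizer of ||y − θ||² + λ||θ||², so S_λ = I/(1 + λ); the data, grid, and constants are illustrative assumptions, not taken from any of the cited papers:

```python
import random

random.seed(0)
n, sigma = 200, 1.0
mu = [2.0 * i / n for i in range(n)]             # true means (illustrative)
y = [m + random.gauss(0.0, sigma) for m in mu]   # observations

def sure(lam):
    """SURE for the shrinkage fit theta_hat = y / (1 + lam).
    The smoother is S_lam = I/(1 + lam), so df = tr(S_lam) = n/(1 + lam)."""
    c = 1.0 / (1.0 + lam)
    rss = sum((yi - c * yi) ** 2 for yi in y)    # ||y - theta_hat||^2
    df = n * c                                   # trace of the smoother
    return rss - n * sigma**2 + 2 * sigma**2 * df

grid = [i / 100 for i in range(301)]             # lambda in [0, 3]
lam_hat = min(grid, key=sure)                    # SURE-minimizing tuning value
print(lam_hat, sure(lam_hat))
```

Because SURE is unbiased for the risk E||θ̂_λ − μ||², minimizing it over the grid mimics the C_p-style model selection mentioned in the excerpt; at λ = 0 the fit interpolates the data and SURE reduces to nσ².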