2004
DOI: 10.1046/j.1369-7412.2003.05316.x
Smoothing Spline Gaussian Regression: More Scalable Computation via Efficient Approximation

Abstract: Smoothing splines via the penalized least squares method provide versatile and effective nonparametric models for regression with Gaussian responses. The computation of smoothing splines is generally of the order O(n³), n being the sample size, which severely limits its practical applicability. We study more scalable computation of smoothing spline regression via certain low dimensional approximations that are asymptotically as efficient. A simple algorithm is presented and the Bayes model that is…
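The low-dimensional approximation the abstract refers to can be illustrated with a toy penalized least-squares fit: restrict the spline to a basis built on q ≪ n knots, so the solve involves a q-dimensional system instead of the full n-dimensional one behind the O(n³) cost. The basis and penalty below are simplified stand-ins (a truncated-power cubic basis with a ridge penalty on the spline coefficients), not the reproducing-kernel construction used in the paper:

```python
import numpy as np

def fit_penalized_spline(x, y, q=20, lam=1e-3):
    """Low-dimensional penalized least-squares fit (illustrative sketch).

    Uses a truncated-power cubic basis on q knots (q << n), so the
    linear solve costs O(n q^2) rather than the O(n^3) of a full
    smoothing spline with a knot at every data point.
    """
    knots = np.quantile(x, np.linspace(0.05, 0.95, q))
    # Design matrix: intercept, linear term, and truncated cubic terms.
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0, None) ** 3 for k in knots])
    # Penalize only the spline coefficients, not the polynomial part.
    P = np.eye(B.shape[1])
    P[0, 0] = P[1, 1] = 0.0
    coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ coef, coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
fhat, _ = fit_penalized_spline(x, y)
```

With q fixed as n grows, the per-fit cost scales linearly in n, which is the practical point of the approximation the paper analyzes.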

Cited by 247 publications (206 citation statements) · References 28 publications
“…Various error distributions were applied to the GAMs and, although the resulting fits were comparable, models using transformed cover data with a normal (Gaussian) error distribution, identity link and a gamma value of 1.4 (Kim and Gu 2004) gave the most reliable and robust outcomes. Several other error distributions, for instance a beta distribution, caused an estimate bias when a series included many sequential zero values (BCM, MA, SP and CCA).…”
Section: Univariate Analyses (mentioning; confidence: 99%)
“…The arbitrary selection of the number of knots requires more research: until a satisfactory theory for modelling with smooth functions is developed, the number of knots must be specified. The smoothing parameter, λ, controls the trade-off between model fit and model smoothness (Kim and Gu, 2004). If λ = 0 there is no penalty, and as λ → ∞ the estimate of f approaches something like a straight line (Wood and Augustin, 2002; Eilers and Marx, 2010).…”
Section: Stationary Data (mentioning; confidence: 99%)
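The limiting behaviour of λ described in that excerpt can be seen in a small sketch (hypothetical basis and penalty, chosen only to expose the two extremes): with λ near 0 the fit follows the data closely, while a huge λ shrinks every penalized term toward zero and leaves only the unpenalized straight line.

```python
import numpy as np

def penalized_fit(x, y, lam):
    # Truncated cubic basis with a handful of interior knots (illustrative).
    knots = np.linspace(0.1, 0.9, 8)
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0, None) ** 3 for k in knots])
    P = np.eye(B.shape[1])
    P[0, 0] = P[1, 1] = 0.0  # leave the straight-line part unpenalized
    coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ coef

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 150)

wiggly = penalized_fit(x, y, lam=1e-8)  # lambda ~ 0: follows the wiggles
stiff = penalized_fit(x, y, lam=1e8)    # lambda -> inf: collapses to a line
```

Because the penalty excludes the intercept and slope, the infinite-λ limit is exactly the least-squares straight line, matching the description in the quoted passage.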
“…Quantile-quantile plots and histograms of residuals were examined to ensure that the distributional assumption was suitable, and plots of linear predictors against residuals revealed variance to be approximately constant for all models. In all cases, the value of γ used in the GCV scores was set at 1.4 to avoid over-fitting (Kim & Gu 2004, Wood 2006) and a gamma distribution was used with a log link function.…”
Section: Functional Response (mentioning; confidence: 99%)
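The γ = 1.4 inflation these excerpts mention enters the GCV criterion through its denominator, GCV(λ) = n·RSS(λ) / (n − γ·tr(H_λ))²: with γ > 1 each effective degree of freedom is charged more heavily, which pushes the selected λ upward and the fit toward smoother curves. A toy version of that selection (simplified basis, not the cited authors' algorithm):

```python
import numpy as np

def gcv_score(x, y, lam, gamma=1.4):
    """GCV with an inflation factor gamma (gamma > 1 penalizes model
    complexity more heavily, discouraging over-fitting).
    Illustrative truncated-power basis, hypothetical helper."""
    knots = np.linspace(0.1, 0.9, 10)
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0, None) ** 3 for k in knots])
    P = np.eye(B.shape[1])
    P[0, 0] = P[1, 1] = 0.0
    H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)  # hat matrix
    resid = y - H @ y
    n = len(y)
    edf = np.trace(H)  # effective degrees of freedom
    return n * (resid @ resid) / (n - gamma * edf) ** 2

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 120))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 120)
grid = 10.0 ** np.arange(-8, 3)
best = min(grid, key=lambda lam: gcv_score(x, y, lam))  # grid-search lambda
```

Setting `gamma=1.0` recovers ordinary GCV; the 1.4 value used in the quoted studies trades a little fit for visibly smoother estimates.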