2012
DOI: 10.1007/s11222-012-9366-0

Multilevel structured additive regression

Abstract: Standard terms of use: Documents on EconStor may be saved and copied for your own scholarly purposes and for private use. You may not reproduce the documents for public or commercial purposes, publicly exhibit them, make them publicly accessible, distribute them, or otherwise use them. If the authors have made the documents available under open-content licences (in particular CC licences), those licence terms apply in place of these terms of use…


Cited by 49 publications (54 citation statements)
References 42 publications
“…Gaussian random effects and η_jk(x; β̃_jk) represents a full predictor of nested covariates, e.g., including a discrete regional spatial effect. This way, potentially costly operations in updating Algorithms A2a and A2b can be avoided, since the number of observations in η_jk(x; β̃_jk) is equal to the number of coefficients in β_jk, which is usually much smaller than the actual number of observations n. Moreover, the full conditionals (see also Section 4.2) for β̃_jk are Gaussian regardless of the response distribution, which leads to highly efficient estimation algorithms; see Lang, Umlauf, Wechselberger, Harttgen, and Kneib (2014).…”
Section: Multilevel Effects (mentioning)
confidence: 99%
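The point quoted above, that the second-level full conditionals stay Gaussian no matter what the response distribution is, can be sketched numerically: the level-1 coefficients play the role of a Gaussian "response" for the level-2 effects, so a standard conjugate Gibbs update applies. This is an illustrative stand-in, not the paper's algorithm; all names, dimensions, and variance values below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# At the second level the "response" is the vector of level-1 coefficients:
#   beta ~ N(Z @ gamma, tau2 * I),
# so the full conditional of gamma given beta is Gaussian regardless of the
# distribution of the actual observations y.
p, q = 40, 3
Z = rng.normal(size=(p, q))        # level-2 design matrix (p coefficients, q effects)
beta = rng.normal(size=p)          # current level-1 coefficients (stand-in values)
tau2, s2 = 0.5, 10.0               # var. of beta around Z @ gamma; prior var. of gamma

# Conjugate Gaussian full conditional: gamma | beta ~ N(m, V) with
#   V = (Z'Z / tau2 + I / s2)^(-1),   m = V Z' beta / tau2
V = np.linalg.inv(Z.T @ Z / tau2 + np.eye(q) / s2)
m = V @ (Z.T @ beta) / tau2
gamma_draw = rng.multivariate_normal(m, V)   # one Gibbs update for gamma
```

Because this update only involves the p level-1 coefficients rather than all n observations, it stays cheap even for large data sets, which is the efficiency argument made in the excerpt.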
“…This is much smaller than the total number of observations of the data set, and duplicated rows in the corresponding design matrix can be avoided within the model-fitting algorithms. Therefore, we implemented updating functions U_jk(·) that support shrinkage of the design matrices based on unique covariate observations, using the highly efficient algorithm of Lang et al. (2014). This essentially employs a reduced form of the diagonal weight matrix W_kk in the IWLS algorithm and computes the reduced partial residual vector from z_k − η_k. For usage within bamlss, see also the documentation of the estimation engines bfit() and GMCMC() and the corresponding updating functions bfit_iwls() and GMCMC_iwls().…”
Section: Cnorm_bamlss() (mentioning)
confidence: 99%
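The design-matrix shrinkage described in this excerpt can be illustrated with a small numerical check: the IWLS cross products computed on the full n-row design matrix agree exactly with those computed on the few unique covariate rows once the working weights and weighted working responses are aggregated per unique value. This is a minimal sketch, not the bamlss implementation; the setup and variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Covariate with many repeated values (e.g. a discrete regional indicator):
x = rng.integers(0, 5, size=1000)           # n = 1000 observations, 5 unique values
X = np.eye(5)[x]                            # full n x 5 dummy design matrix
w = rng.uniform(0.5, 2.0, size=1000)        # IWLS working weights (diagonal of W)
z = rng.normal(size=1000)                   # IWLS working response

# Full-data cross products (cost grows with n):
XtWX_full = X.T @ (w[:, None] * X)
XtWz_full = X.T @ (w * z)

# Reduced form: aggregate weights and weighted working responses per unique
# covariate value, then use the small unique-row design matrix.
uniq, inv = np.unique(x, return_inverse=True)
w_red = np.bincount(inv, weights=w)          # summed weights per unique value
wz_red = np.bincount(inv, weights=w * z)     # summed w*z per unique value
X_red = np.eye(5)[uniq]                      # reduced design matrix (one row per value)

XtWX_red = X_red.T @ (w_red[:, None] * X_red)
XtWz_red = X_red.T @ wz_red

assert np.allclose(XtWX_full, XtWX_red)
assert np.allclose(XtWz_full, XtWz_red)
```

The least-squares system solved in each IWLS step depends on the data only through these cross products, so working with the reduced quantities gives identical updates at a fraction of the cost when the covariate has few unique values.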
“…Recently, Lang et al. [2013] proposed a multilevel version of structured additive regression models where it is assumed that the regression coefficients β_j of a term f_j in (3) may themselves obey a regression model with structured additive predictor, i.e.…”
Section: Multilevel Framework (mentioning)
confidence: 99%
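The multilevel idea quoted here, regression coefficients that themselves obey a regression model, can be made concrete with a simple two-stage least-squares toy example. The actual approach of Lang et al. [2013] is a joint Bayesian treatment, not two-stage estimation; everything below (dimensions, parameter values, estimators) is an invented illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 2: cluster-level coefficients follow their own linear predictor:
#   beta_j = gamma0 + gamma1 * u_j + b_j,   b_j ~ N(0, 0.3^2)
J = 50
u = rng.normal(size=J)                      # cluster-level covariate
beta = 1.0 + 2.0 * u + rng.normal(scale=0.3, size=J)

# Level 1: observations within clusters:  y_ij = beta_j * x_ij + eps_ij
n_per = 20
cluster = np.repeat(np.arange(J), n_per)
x = rng.normal(size=J * n_per)
y = beta[cluster] * x + rng.normal(scale=0.5, size=J * n_per)

# Stage 1: estimate each beta_j from its own cluster (OLS slope, no intercept)
beta_hat = np.array([
    np.sum(x[cluster == j] * y[cluster == j]) / np.sum(x[cluster == j] ** 2)
    for j in range(J)
])

# Stage 2: regress the estimated coefficients on the level-2 covariate
Z = np.column_stack([np.ones(J), u])
gamma_hat, *_ = np.linalg.lstsq(Z, beta_hat, rcond=None)
# gamma_hat recovers approximately (1.0, 2.0)
```

In the multilevel structured additive framework the second stage is not restricted to a linear predictor: the coefficients can depend on smooth, spatial, or random effects, and both levels are estimated jointly.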
“…Inference can then be based on optimizing a generalized cross-validation criterion [Wood, 2004], a mixed model representation [Ruppert et al., 2003, Fahrmeir et al., 2004, Wood, 2008], or Markov chain Monte Carlo (MCMC) simulations [Brezger and Lang, 2006, Jullion and Lambert, 2007, Lang et al., 2013]. The framework of generalized additive models for location, scale and shape (GAMLSS) introduced by Rigby and Stasinopoulos [2005] makes it possible to extend generalized additive models to more complex response distributions, where not only the expectation but multiple parameters are related to additive predictors via suitable link functions.…”
Section: Introduction (mentioning)
confidence: 99%
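The GAMLSS structure mentioned above, several distribution parameters each tied to its own predictor through a link function, can be sketched with a tiny heteroscedastic Gaussian example. This is not the Rigby and Stasinopoulos implementation; the model, data, and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-1, 1, size=n)

# GAMLSS-style Gaussian model: BOTH parameters get their own predictor:
#   mu(x)        = 0.5 + 1.5 * x        (identity link)
#   log sigma(x) = -0.5 + 1.0 * x       (log link keeps sigma > 0)
mu = 0.5 + 1.5 * x
sigma = np.exp(-0.5 + 1.0 * x)
y = rng.normal(mu, sigma)

def nll(b0, b1, g0, g1):
    """Negative Gaussian log-likelihood with linear predictors on mu and log-sigma."""
    m = b0 + b1 * x
    s = np.exp(g0 + g1 * x)
    return np.sum(np.log(s) + 0.5 * ((y - m) / s) ** 2)

# The true two-predictor model fits better than a mean-only,
# constant-variance Gaussian model:
assert nll(0.5, 1.5, -0.5, 1.0) < nll(y.mean(), 0.0, np.log(y.std()), 0.0)
```

Modelling log σ (or any other distribution parameter) with its own additive predictor is exactly the extension beyond mean regression that the excerpt refers to; in full GAMLSS each predictor may contain smooth, spatial, and random-effect terms.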