2018
DOI: 10.1007/s13171-017-0122-6

Efficient Shrinkage for Generalized Linear Mixed Models Under Linear Restrictions

Cited by 6 publications (2 citation statements)
References 24 publications
“…One of the choices of this matrix is the identity matrix, which will be used in the simulation study. Other choices of this matrix are also available, for example, W = I⁻¹ or a general W, which gives a weighted loss. Proof: Similar proofs can be found in Thomson and Hossain (2018). □ Remark 5.2: Under H₀: Aβ = h, that is, when Δ = 0, β̂_R is the best choice and it strongly dominates β̂_F.…”
Section: Theorem 5.2 If the Conditions of Theorem 5.1 Hold Then the…
confidence: 98%
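
For context, the loss mentioned in this excerpt is, in the standard shrinkage-estimation setup, a weighted quadratic loss; a sketch in the usual notation follows (assumed here rather than quoted from the paper, with W the weight matrix and β̂ any estimator of β):

    % Weighted quadratic loss in standard shrinkage notation (an assumption,
    % not a quotation from Thomson and Hossain 2018):
    %   W = I       gives ordinary squared-error loss;
    %   W = I^{-1}  reweights the components by the information matrix.
    \mathcal{L}(\hat{\beta}; W) = n \, (\hat{\beta} - \beta)^{\top} W \, (\hat{\beta} - \beta)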
“…This algorithm is available in the R programming language, namely the glmnet package [9]. Several researchers have discussed variable selection procedures in GLMMs using the L1 penalty, including Thomson and Hossain (2018) [10], Groll and Tutz (2014) [2], Schelldorfer et al. (2011) [11], and Ibrahim et al. (2010) [12]. The LASSO GLMM produces stable estimates because the L1 penalty can select the important predictor variables used in the GLMM [2].…”
Section: Introduction
confidence: 99%
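
To illustrate the L1-penalized selection step this excerpt describes, here is a minimal sketch in R using glmnet (all data, dimensions, and variable names are invented for illustration; note that glmnet itself fits penalized GLMs with fixed effects only, while penalized mixed models require tools such as those in the works cited above):

    # Minimal sketch: LASSO variable selection for a binomial GLM with glmnet.
    # Simulated data for illustration only.
    library(glmnet)

    set.seed(1)
    n <- 200; p <- 10
    X <- matrix(rnorm(n * p), n, p)        # hypothetical predictors
    beta <- c(1.5, -2, rep(0, p - 2))      # only the first two are truly active
    y <- rbinom(n, 1, plogis(X %*% beta))  # binary response

    # Cross-validated LASSO path; alpha = 1 requests the pure L1 penalty.
    cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1)

    # Coefficients at the CV-chosen lambda: entries shrunk exactly to zero
    # are dropped from the model, which is the selection effect cited above.
    coef(cvfit, s = "lambda.min")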