Scaled sparse linear regression
Sun and Zhang (2012)
DOI: 10.1093/biomet/ass043

Cited by 428 publications (490 citation statements: 5 supporting, 485 mentioning, 0 contrasting). References 26 publications.
“…In general, the asymptotic equivalence (4.7) follows from the orthogonality of the parameters β and σ² in the log-likelihood function, rather than the use of the Lasso estimators β̂ and σ̂². This implies that, for the estimation of (β, σ), we can use alternative estimators such as the scaled Lasso (Sun and Zhang, 2012),…”
Section: Estimation of Unknown Variance σ (mentioning, confidence: 99%)
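For context, the scaled Lasso referenced in this excerpt estimates β and σ jointly by minimizing a single criterion. A standard way to write the objective (assuming the columns of X are normalized so that ‖x_j‖₂² = n) is:

\[
(\hat{\beta}, \hat{\sigma}) \;=\; \arg\min_{\beta \in \mathbb{R}^{p},\ \sigma > 0} \; \frac{\lVert y - X\beta \rVert_2^2}{2n\sigma} \;+\; \frac{\sigma}{2} \;+\; \lambda \lVert \beta \rVert_1 .
\]

This criterion is jointly convex in (β, σ), and minimizing over σ alone gives the closed form σ̂ = ‖y − Xβ‖₂/√n; that update is what the alternating algorithm sketched after the next excerpt uses.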
“…Scaled sparse linear regression by Sun and Zhang (2012) gives a general approach to regression and penalization, with special focus on the LASSO. The idea is to jointly estimate both the parameters in the model and the noise level.…”
Section: Related Work (mentioning, confidence: 99%)
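To make the joint estimation idea concrete, here is a minimal sketch of the alternating scheme, assuming scikit-learn's Lasso for the β-step and the commonly used penalty level λ₀ = √(2 log p / n). The function name and defaults are illustrative, not the paper's reference implementation (for that, see the scalreg package cited below).

import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam0=None, n_iter=100, tol=1e-8):
    """Jointly estimate (beta, sigma) by alternating a Lasso step with
    penalty sigma * lam0 and a closed-form noise-level update."""
    n, p = X.shape
    if lam0 is None:
        # Universal penalty level; an assumed default, other choices exist.
        lam0 = np.sqrt(2.0 * np.log(p) / n)
    sigma = np.std(y)   # crude initial noise-level guess
    beta = np.zeros(p)
    for _ in range(n_iter):
        # beta-step: for fixed sigma the scaled objective reduces to a
        # Lasso with alpha = sigma * lam0, since sklearn's Lasso minimizes
        # ||y - Xb||^2 / (2n) + alpha * ||b||_1.
        beta = Lasso(alpha=sigma * lam0, fit_intercept=False).fit(X, y).coef_
        # sigma-step: minimizing the joint objective in sigma alone gives
        # sigma = ||y - X beta||_2 / sqrt(n).
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma

Because the joint objective is convex, the alternation converges to the global minimizer; on a standardized design a few iterations typically suffice.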
“…Oracle inequalities are proved for prediction and for estimation of the noise level and the regression coefficients. An implementation can be found in the R package scalreg (Sun, 2013); the method can be used with the function scalreg.…”
Section: Related Work (mentioning, confidence: 99%)
“…To integrate information from multiple GWAS, we propose the following optimization problem [displayed equation omitted from the excerpt], where γ is the regularization parameter controlling the sparsity; the problem is closely related to the scaled Lasso problem [21]. Here we emphasize the integration of information from multiple GWAS and use the group penalty to achieve this goal.…”
Section: Algorithm (mentioning, confidence: 99%)
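The displayed objective is omitted from this excerpt. Purely as an illustration of how a group penalty can be grafted onto the scaled-Lasso criterion (an assumed form, not necessarily the citing authors' exact formulation), one can write, for coefficient groups G₁, …, G_K:

\[
\min_{\beta,\ \sigma > 0} \; \frac{\lVert y - X\beta \rVert_2^2}{2n\sigma} \;+\; \frac{\sigma}{2} \;+\; \gamma \sum_{k=1}^{K} \lVert \beta_{G_k} \rVert_2 ,
\]

where, on a plausible reading of the excerpt, each group collects one variant's effects across the multiple GWAS, so the ℓ₂ group norm encourages a variant to be selected jointly across studies.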