2016
DOI: 10.1080/01621459.2015.1008363

Variable Selection With Prior Information for Generalized Linear Models via the Prior LASSO Method

Abstract: LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), t…

Cited by 81 publications (67 citation statements); References 44 publications.

“…These are generally computationally very demanding for typical high-dimensional settings with a large number of variables. Moreover, frequentist solutions have been proposed, which usually require additional tuning parameter(s) to cross-validate (Bergersen, Glad, & Lyng, 2011; Jiang, He, & Zhang, 2016b) or a group penalty (Meier, van de Geer, & Bühlmann, 2008; Simon, Friedman, Hastie, & Tibshirani, 2013). The latter may perform less well than EB-based regularization per group when the number of groups is small (Novianti et al, 2017).…”
Section: Discussion and Extensions (mentioning)
confidence: 99%
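
For readers less familiar with these frequentist approaches: the extra tuning parameter typically controls how strongly the external information reweights the per-variable L1 penalties, and it has to be chosen by cross-validation on top of the usual lasso penalty. The Python sketch below illustrates that workflow under stated assumptions; the array prior_score and the weighting rule w_j = prior_score_j^(-gamma) are illustrative placeholders, not the exact constructions of Bergersen et al. (2011) or Jiang et al. (2016b).

import numpy as np
from sklearn.linear_model import LassoCV

# Minimal sketch of a prior-weighted lasso with an extra tuning parameter gamma
# chosen by cross-validation. The weighting rule w_j = prior_score_j**(-gamma)
# is an assumption for illustration, not the cited authors' exact formulation.
rng = np.random.default_rng(0)

# Toy data: 200 samples, 50 variables, only the first 5 truly active.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.concatenate([np.full(5, 1.0), np.zeros(p - 5)])
y = X @ beta_true + rng.standard_normal(n)

# Hypothetical prior relevance scores in (0, 1]; in practice these would come
# from earlier studies or external annotation.
prior_score = np.full(p, 0.2)
prior_score[:5] = 0.9

def fit_prior_weighted_lasso(X, y, prior_score, gamma, cv=5):
    """Weighted lasso with penalty factors w_j = prior_score_j ** (-gamma).

    Uses the standard reparameterization: an L1 penalty with per-variable
    weights w_j equals an ordinary lasso on columns rescaled by 1 / w_j.
    """
    w = prior_score ** (-gamma)          # gamma = 0 recovers the plain lasso
    X_scaled = X / w                     # divide column j by w_j
    model = LassoCV(cv=cv, random_state=0).fit(X_scaled, y)
    beta = model.coef_ / w               # map back to the original scale
    cv_error = model.mse_path_.mean(axis=1).min()
    return beta, cv_error

# The additional tuning parameter gamma is itself selected by cross-validation,
# which is the extra computational cost the passage above refers to.
gammas = [0.0, 0.5, 1.0, 2.0]
results = {g: fit_prior_weighted_lasso(X, y, prior_score, g) for g in gammas}
best_gamma = min(results, key=lambda g: results[g][1])
print("selected gamma:", best_gamma)
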
“…There has been some recent work on how to incorporate prior information. For example, Wang et al (2013) developed a LASSO method by assigning different prior distributions to each subset according to a modified Bayesian information criterion that incorporates prior knowledge on both the network structure and the pathway information, and Jiang et al (2015) proposed “prior lasso” (plasso) to balance between the prior information and the data. A natural extension of the current work is to develop a variable screening approach that incorporates more complex prior knowledge, such as the network structure or the spatial information of the covariates.…”
Section: Discussion (mentioning)
confidence: 99%
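
To make the "balance between the prior information and the data" concrete, the sketch below fits a Gaussian-response lasso whose loss mixes the observed responses with prior-based predictions through a tuning parameter eta. The quadratic prior-agreement term, the solver, and the name balanced_lasso are simplifying assumptions for illustration; they do not reproduce the exact pLASSO criterion of Jiang et al.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def balanced_lasso(X, y, y_prior, eta, lam, n_iter=2000):
    """Illustrative 'prior-balanced' lasso for a Gaussian linear model.

    Minimizes
        (1/2n)||y - Xb||^2 + (eta/2n)||y_prior - Xb||^2 + lam * ||b||_1,
    i.e. a data-fit term plus an agreement term with prior-based predictions,
    traded off by eta. A sketch of the balancing idea only, not the exact
    pLASSO objective.
    """
    n, p = X.shape
    # Step size from the Lipschitz constant of the smooth part's gradient,
    # ((1 + eta)/n) * ||X||_2^2.
    step = n / ((1.0 + eta) * np.linalg.norm(X, 2) ** 2)
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = (X.T @ (X @ b - y) + eta * X.T @ (X @ b - y_prior)) / n
        b = soft_threshold(b - step * grad, step * lam)   # proximal gradient step
    return b

# Toy usage: y_prior could be predictions from a model fitted on earlier data.
# eta = 0 ignores the prior entirely; a large eta trusts it heavily.
rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.concatenate([np.ones(3), np.zeros(p - 3)])
y = X @ beta_true + rng.standard_normal(n)
y_prior = X @ beta_true                # an (unrealistically accurate) prior
b_hat = balanced_lasso(X, y, y_prior, eta=1.0, lam=0.1)
print(np.round(b_hat[:5], 2))
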
“…Condition (C) is justified by Jiang and Zhang (2013) and Jiang et al. (2016) and is similar to condition 3 in Bradic et al. (2011).…”
Section: Theoretical Properties (mentioning)
confidence: 97%
“…In practice, data are often standardized at the preprocessing stage, which may warrant the reasonableness of this condition. Condition (C) is justified by Jiang and Zhang (2013) and Jiang et al (2016) and is similar to condition 3 in Bradic et al (2011). The condition ensures the light tail of the response variable Y and is satisfied by a wide range of outcome data, including Gaussian and discrete data (such as binary and count data).…”
Section: Introduction (mentioning)
confidence: 97%
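
For orientation, one standard way such light-tail requirements on the response are formalized is through a sub-exponential (Bernstein-type) moment bound; the LaTeX display below is an illustrative form of this kind of condition, not necessarily the exact statement of Condition (C).

% Illustrative sub-exponential tail condition (assumed form, not the exact
% statement of Condition (C)): there exist constants $M, v > 0$ such that
\[
  \mathbb{E}\bigl[\exp\{t\,(Y - \mathbb{E}Y)\}\bigr] \;\le\; \exp\!\Bigl(\frac{v\,t^{2}}{2}\Bigr)
  \qquad \text{for all } |t| \le \tfrac{1}{M},
\]
% which holds for Gaussian responses and for common discrete responses such as
% binary data and Poisson counts with a bounded mean.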