2019
DOI: 10.1002/sim.8130
MEBoost: Variable selection in the presence of measurement error

Abstract: We present a novel method for variable selection in regression models when covariates are measured with error. The iterative algorithm we propose, Measurement Error Boosting (MEBoost), follows a path defined by estimating equations that correct for covariate measurement error. We illustrate the use of MEBoost in practice by analyzing data from the Box Lunch Study, a clinical trial in nutrition where several variables are based on self‐report and, hence, measured with error, where we are interested in performin…
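The abstract describes MEBoost as an iterative algorithm that follows a path defined by measurement-error-corrected estimating equations. As a rough illustration only (not the paper's actual implementation), a boosting step along a corrected score for linear regression with additive covariate error of known covariance might look like the sketch below; the function name, step size, and iteration count are all illustrative assumptions:

```python
import numpy as np

def meboost_sketch(W, y, Sigma_u, eps=0.01, n_iter=200):
    """Illustrative sketch of a measurement-error-corrected boosting path.

    Assumes W = X + U, where X is the true design matrix and U is
    additive measurement error with known covariance Sigma_u, so that
    W'W - n*Sigma_u is an (approximately) unbiased estimate of X'X.
    At each iteration the coordinate with the largest corrected score
    is nudged by eps, tracing out a variable-selection path.
    """
    n, p = W.shape
    beta = np.zeros(p)
    path = [beta.copy()]
    for _ in range(n_iter):
        # corrected estimating equation: replaces the naive score
        # W'(y - W beta) with one that subtracts the error covariance
        score = W.T @ y - (W.T @ W - n * Sigma_u) @ beta
        j = np.argmax(np.abs(score))          # coordinate to update
        beta[j] += eps * np.sign(score[j])    # small boosting step
        path.append(beta.copy())
    return np.array(path)                     # (n_iter + 1, p) path
```

In this sketch, model selection amounts to reading off which coefficients have entered the path at a given stopping point, mirroring how boosting-style algorithms trace variable-selection paths.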

Cited by 20 publications (19 citation statements)
References 22 publications
“…There are some methods that fall outside the category of regularization methods. For instance, the authors in [14] first performed model selection and then applied corrected least squares to the reduced model for estimation; Chen and Caramanis [11] modified the orthogonal matching pursuit algorithm for variable selection in errors-in-variables linear regression; the measurement error boosting (MEBoost) algorithm [8] is based on classical estimating equations and performs measurement-error-corrected variable selection along its iterative path.…”
Section: Introduction
confidence: 99%
“…However, the framework in which the two models were compared (n=100, p=250, s=3) is smaller than the dimensions considered in our simulations, and no correlation structure was considered for the measurement error, which may explain the differing results. Brown et al [19] included the CoCoLasso in their investigation of MEBoost and showed that the CoCoLasso had consistently lower sensitivity but higher specificity compared with the naive lasso. Our simulations show consistently lower specificity.…”
Section: Discussion
confidence: 99%
“…Correction of penalized regression is not the only method available to correct for measurement error. For example, the two-stage non-penalized corrected least squares [18] separates model selection and estimation into two different steps, using corrected least squares for the final estimation; Measurement Error Boosting (MEBoost) [19] is an iterative functional gradient descent-type algorithm that generates measurement-error-corrected variable selection paths. Chen and Caramanis [20] developed a modified version of the orthogonal matching pursuit algorithm, also to deal with measurement error in the covariates in high-dimensional regression.…”
Section: Introduction
confidence: 99%
“…Univariate Cox regression analysis was performed to screen out hypoxia genes related to OS (P < 0.001). LASSO regression can avoid over-fitting, further optimize the genes selected after univariate Cox regression, and delete highly correlated genes [45]. Finally, multivariate Cox regression analysis was carried out stepwise to set up a prognostic model.…”
Section: Methods
confidence: 99%