2017
DOI: 10.1177/0962280217744588

Collaborative-controlled LASSO for constructing propensity score-based estimators in high-dimensional data

Abstract: Propensity score-based estimators are increasingly used for causal inference in observational studies. However, model selection for propensity score estimation in high-dimensional data has received little attention. In these settings, propensity score models have traditionally been selected based on the goodness-of-fit for the treatment mechanism itself, without consideration of the causal parameter of interest. Collaborative minimum loss-based estimation is a novel methodology for causal inference that takes …

Cited by 36 publications (39 citation statements). References 53 publications (134 reference statements).
“…However, such an approach merely conditions on the (selected) covariates, and does not account for the uncertainty induced by the (data-driven) covariate selection procedure. In general, existing post-selection inference procedures (see, e.g., Berk et al,55 Chernozhukov et al,56 and Ju et al,43 among others) typically consider regularized regression methods, and share the limitation that they lack finite-sample guarantees and attain their desired theoretical properties only for large sample sizes. Various aspects of our proposal (i.e., using double selection to prioritize covariates, focusing on the stability of the marginal effect estimator, and employing randomization inference) were chosen so that standard inference would deliver reasonable approximations in finite samples.…”
Section: Discussion
confidence: 99%
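The double-selection step mentioned in the quote above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: the data, variable names, and the use of scikit-learn's LassoCV are all assumptions made for the example.

```python
# Illustrative sketch of double selection: keep covariates chosen by a lasso
# of either the treatment or the outcome on X, then estimate the effect by an
# unpenalized regression on the union. Data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
treat = X[:, 0] + rng.normal(size=n)             # treatment depends on X[:, 0]
y = 2.0 * treat + X[:, 1] + rng.normal(size=n)   # outcome; true effect is 2.0

sel_t = np.flatnonzero(LassoCV(cv=3).fit(X, treat).coef_)  # treatment lasso
sel_y = np.flatnonzero(LassoCV(cv=3).fit(X, y).coef_)      # outcome lasso
keep = sorted(set(sel_t) | set(sel_y))                     # union of selections
design = np.column_stack([treat, X[:, keep]])
effect = LinearRegression().fit(design, y).coef_[0]        # effect estimate
```

Taking the union of the two selected sets is what protects the final unpenalized fit against omitting a confounder that matters for only one of the two regressions.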
“…M6 A Wald test using CTMLE with LASSO by fitting a penalized logistic regression model to estimate the PS model. 43 Instead of a forward selection approach, the sequence of candidate PS models is now indexed by different values of the LASSO regularization penalty. As in the discrete CTMLE approach above, the PS model that minimizes the (cross-validated) loss function of the treatment effect estimator is chosen.…”
Section: S0
confidence: 99%
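A greatly simplified sketch of the selection principle described above (not the full C-TMLE algorithm): candidate propensity score models are indexed by the LASSO penalty, and the penalty is chosen by a cross-validated loss tied to the effect estimator rather than to the treatment-model fit. The data, the penalty grid, the instability-based loss, and the IPW stand-in for the TMLE update are all assumptions made for illustration.

```python
# Hypothetical sketch: index PS candidates by the lasso penalty and pick the
# penalty minimizing a cross-validated loss of the effect estimator (here,
# the across-fold variance of a simple IPW estimate).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, p = 500, 20
X = rng.normal(size=(n, p))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # binary treatment
Y = A + X[:, 0] + rng.normal(size=n)              # outcome; true effect is 1

def ipw_effect(ps, A, Y):
    ps = np.clip(ps, 0.025, 0.975)                # truncate extreme scores
    return np.mean(A * Y / ps) - np.mean((1 - A) * Y / (1 - ps))

penalties = [0.01, 0.1, 1.0, 10.0]                # candidate lasso strengths
losses = []
for lam in penalties:
    fold_effects = []
    for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
        ps_model = LogisticRegression(penalty="l1", C=1 / lam,
                                      solver="liblinear")
        ps = ps_model.fit(X[tr], A[tr]).predict_proba(X[te])[:, 1]
        fold_effects.append(ipw_effect(ps, A[te], Y[te]))
    losses.append(np.var(fold_effects))           # instability as proxy loss
best = penalties[int(np.argmin(losses))]          # selected penalty
```

The key contrast with conventional tuning is in the loop's last line: the loss scores the downstream effect estimate, not the likelihood of the treatment model.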
“…Recently, C-TMLE algorithms were developed for a continuous tuning parameter, together with a general theorem establishing the asymptotic normality of the resulting C-TMLE estimators. Building on this work, LASSO-C-TMLE was proposed, in which the PS is estimated by a LASSO controlled by C-TMLE, and [Ju et al, 2017e] demonstrated its performance on a high-dimensional electronic health records dataset. We simply consider the truncation quantile γ as a tuning parameter, and extend the C-TMLE algorithm to select the optimal γ for the estimation of the causal parameter.…”
Section: Introduction
confidence: 99%
“…cbioportal.org) [18]. Least absolute shrinkage and selection operator (LASSO) regression analysis, a common method for regression with high-dimensional predictors, was used to construct a prognostic model while minimizing overfitting [19,20]. Three-fold cross-validation was conducted to reduce the potential instability of the results, and the optimal tuning parameter λ was identified using the one-standard-error (1-SE) criterion.…”
Section: Construction of a Prognostic FRG Signature
confidence: 99%
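The tuning procedure described in the quote above (3-fold cross-validation with the 1-SE rule for λ) can be sketched as follows. The synthetic data are an assumption, and since scikit-learn's LassoCV does not implement the 1-SE rule directly, it is computed here from the stored CV error path.

```python
# Sketch of lasso tuning with 3-fold CV and the one-standard-error rule:
# choose the largest penalty whose CV error is within one SE of the minimum.
# Data are synthetic; the cited study used gene-expression features.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 120, 40
X = rng.normal(size=(n, p))
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)

fit = LassoCV(cv=3).fit(X, y)
mse = fit.mse_path_.mean(axis=1)                  # mean CV error per alpha
se = fit.mse_path_.std(axis=1) / np.sqrt(fit.mse_path_.shape[1])
i_min = int(np.argmin(mse))
threshold = mse[i_min] + se[i_min]
# fit.alphas_ is sorted in decreasing order, so the first index meeting the
# threshold corresponds to the largest (most parsimonious) penalty.
i_1se = int(np.argmax(mse <= threshold))
lambda_1se = fit.alphas_[i_1se]
```

Because the 1-SE penalty is at least as large as the CV-minimizing one, the resulting model is sparser, which is the overfitting control the quoted passage is aiming for.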