2015
DOI: 10.1002/sim.6504

Model‐based standardization to adjust for unmeasured cluster‐level confounders with complex survey data

Abstract: Model-based standardization uses a statistical model to estimate a standardized, or unconfounded, population-averaged effect. With it, one can compare groups as if the distribution of confounders in each group had been identical to that of the standard population. We develop two methods for model-based standardization with complex survey data that accommodate a categorical confounder that clusters the individual observations into a very large number of subgroups. The first method combines a random-intercept general…
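For orientation, the standardized mean that the abstract refers to can be written in a generic textbook form; the notation below is an illustration and is not drawn from the paper itself.

```latex
% Standardized (unconfounded) mean outcome under exposure level x:
% average the confounder-stratum-specific means over the standard
% population's confounder distribution Pr_std(C = c).
\mu_x^{\mathrm{std}} = \sum_{c} E[\, Y \mid X = x,\ C = c \,]\,
                       \Pr\nolimits_{\mathrm{std}}(C = c)
% A standardized effect then contrasts these means across exposure
% groups, e.g. \mu_1^{std} - \mu_0^{std} or \mu_1^{std} / \mu_0^{std}.
```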

Cited by 4 publications (4 citation statements, all classified as mentioning); references 17 publications. Citing years: 2017 (×2), 2022 (×2).
“…Specific methods for use in survival settings also exist and are the subject of ongoing research (26,33,61), but they are not discussed here. Also not covered here is the special case of uncontrolled confounding in multilevel or mixed model settings (10,32,39). Throughout, we focus on study settings in which the exposure-outcome association or effect is quantified using risk difference, mean difference, risk ratio (as in cohort studies), or odds ratio (as in case-control studies).…”
Section: Scope (mentioning)
confidence: 99%
“…The parametric G-formula can provide valid PAF or GIF estimates by overcoming the limitations of the conventional methods for PAF and GIF estimation through generating a counterfactual population and using appropriate models.48,49 Vangen-Lønne et al.50 applied the parametric G-formula to investigate the effect of joint interventions for complete or partial elimination of stroke risk factors on the 18-year cumulative stroke risk. Their findings showed that the risk of stroke would be reduced by 28% if SBP decreased to less than 140 mmHg in all individuals.…”
Section: Discussion (mentioning)
confidence: 99%
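A minimal R sketch of the point-exposure g-computation idea this passage describes: fit an outcome model, predict risk in the observed population and in a counterfactual population with the risk factor eliminated, and take the population attributable fraction (PAF) as the relative risk reduction. The simulated data and all variable names are hypothetical; this is not the cited study's implementation.

```r
# Point-exposure g-computation sketch of a PAF (hypothetical data and
# names; not the cited study's implementation).
set.seed(2)
n  <- 5000
df <- data.frame(age = rnorm(n, 60, 10),
                 htn = rbinom(n, 1, 0.4))              # risk factor
df$stroke <- rbinom(n, 1, plogis(-4 + 0.04 * df$age + 0.7 * df$htn))

out <- glm(stroke ~ age + htn, data = df, family = binomial)

# Step 1: model-based risk in the population as observed.
risk_obs <- mean(predict(out, newdata = df, type = "response"))

# Step 2: counterfactual population with the risk factor eliminated.
risk_cf  <- mean(predict(out, newdata = transform(df, htn = 0),
                         type = "response"))

# Step 3: PAF = proportion of risk attributable to the risk factor.
(paf <- (risk_obs - risk_cf) / risk_obs)
```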
“…Standard GLMM software, such as SAS PROC GLIMMIX or the R glmer function, can be used to estimate the vector of parameters $\theta = (\alpha_0, \beta, \gamma, \alpha_X, \alpha_Z, \alpha_N, \tau)$ in the between‐within model. Brumback et al. offered an example of a different choice for $q(\cdot)$, letting it depend on the maximum of $X_{ij}$ across $j = 1, \ldots, N_i$ rather than on $\bar{X}_i$.…”
Section: Outcome‐modeling Approach (mentioning)
confidence: 99%
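A minimal, self-contained R sketch of fitting such a between-within model with lme4::glmer on simulated data; all variable names (y, x, xbar, z, n, cluster) and the data-generating values are hypothetical assumptions, not the quoted paper's code.

```r
# Between-within random-intercept logistic model, sketched with lme4.
# Simulated toy data; names and coefficients are hypothetical.
library(lme4)

set.seed(1)
K    <- 200                                  # number of clusters
size <- sample(2:5, K, replace = TRUE)       # many small clusters
dat  <- data.frame(cluster = rep(seq_len(K), times = size))
dat$x <- rbinom(nrow(dat), 1, 0.5)           # individual-level exposure
dat$z <- rnorm(nrow(dat))                    # individual-level covariate
dat$n    <- ave(dat$x, dat$cluster, FUN = length)  # cluster size N_i
dat$xbar <- ave(dat$x, dat$cluster)                # cluster mean of x
u <- rnorm(K, sd = 0.7)[dat$cluster]               # random intercepts
dat$y <- rbinom(nrow(dat), 1,
                plogis(-1 + 0.8*dat$x + 0.5*dat$xbar + 0.3*dat$z + u))

# Adding the cluster mean xbar alongside x separates the within-cluster
# effect (x) from the between-cluster effect (xbar); the random
# intercept (1 | cluster) absorbs residual cluster-level variation.
fit <- glmer(y ~ x + xbar + z + n + (1 | cluster),
             data = dat, family = binomial)
summary(fit)  # fixed effects correspond to (alpha_0, beta, gamma, ...)
```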
“…However, when there is a categorical confounder that clusters the observations into a large number of small categories, model‐based standardization can fail due to the Neyman‐Scott problem, wherein maximum likelihood estimates of the parameters of the statistical model can be inconsistent. Recent research has extended the exposure‐modeling approach to address this problem by incorporating random effects; our goal is to investigate incorporation of random effects into the outcome‐modeling approach and to compare this to the exposure modeling approaches, particularly that of Skinner and D'Arrigo …”
Section: Introduction (mentioning)
confidence: 99%