2008
DOI: 10.1093/poq/nfn013

Validating Health Insurance Coverage Survey Estimates: A Comparison of Self-Reported Coverage and Administrative Data Records

Cited by 47 publications (58 citation statements). References 8 publications.
“…This is similar to the rate of 6% that Nelson et al. (2000) found in their study of the BRFSS. However, the estimate from the MEPS-HC suggests less accuracy than that obtained by the Minnesota Tobacco Survey (Davern et al. 2005), in which 0.6% of the privately insured did not report their coverage.…”
Section: Results
Citation type: mentioning
confidence: 65%
“…Dating back to the 1990s, previous research has shown that estimates of Medicaid coverage based on the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) have consistently produced an undercount of beneficiaries when compared to actual Medicaid enrollment records (Card et al. 2004; Davern et al. 2008; Davern et al. 2009; Klerman et al. 2005; Lewis et al. 1998). We examine two hypotheses related to how respondents interpret questions about health insurance on the CPS ASEC: 1) whether they interpret the question as asking about their current coverage status or if they have difficulty remembering past coverage, and 2) whether respondents confuse different programs or only report certain forms of insurance.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…A large segment of work in this area has focused on addressing the so-called “Medicaid undercount”, or the well-validated concern that estimates of Medicaid participation drawn from survey data sources are consistently lower than participation rates drawn from administrative data records (Call, Davidson, Davern, Blewett, & Nyman, 2008; Call, Davern, Klerman, & Lynch, 2012; Davern, Call, Ziegenfuss, Davidson, Beebe, & Blewett, 2008; Davern, Klerman, Baugh, Call, & Greenberg, 2009; Klerman, Ringel, & Roth, 2005). Studies have tended to use either an experimental approach, in which a random sample of survey respondents is drawn from administrative records and then survey respondents’ reports of program take-up are cross-checked with the administrative data, or a matching approach, in which administrative data records are identified and linked with respondents drawn from existing survey data sources and overlap between the two sources is examined (Call et al., 2008; Davern et al., 2008, 2009).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…National studies have found substantial underreporting of Medicaid take-up, such that approximately 42% of respondents identified in administrative data to be Medicaid recipients self-identify as non-recipients in national survey reports (e.g., Davern et al., 2009).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
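The matching approach described in these citation statements reduces to a simple calculation once the two sources are linked: join administrative enrollment records to survey responses on a person-level identifier, then compute the share of administratively confirmed enrollees who do not report coverage in the survey. The sketch below is a minimal illustration with synthetic data; the column names (person_id, reports_medicaid) and the pandas merge logic are assumptions for exposition, not the procedure used in the cited studies.

```python
import pandas as pd

# Synthetic stand-ins for the two data sources (hypothetical column names).
# Administrative records: everyone listed here is a confirmed Medicaid enrollee.
admin = pd.DataFrame({"person_id": range(1, 11)})

# Linked survey responses: self-reported Medicaid coverage (True/False).
survey = pd.DataFrame({
    "person_id":        [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "reports_medicaid": [True, False, True, True, False,
                         True, False, True, False, True],
})

# Matching approach: link administrative enrollees to their survey reports.
linked = admin.merge(survey, on="person_id", how="inner")

# Undercount rate: share of confirmed enrollees who do not report coverage.
undercount_rate = 1 - linked["reports_medicaid"].mean()
print(f"Medicaid undercount rate: {undercount_rate:.0%}")  # 40% in this toy data
```

A real analysis would of course need survey weights, handling of partial-year coverage, and a linkage step far more involved than an exact-key merge, but the core quantity (the false-negative rate among administratively verified enrollees) is the one the 42% figure above describes.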