2020
DOI: 10.1017/s0003055420000374

When to Worry about Sensitivity Bias: A Social Reference Theory and Evidence from 30 Years of List Experiments

Abstract: Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias, a subset of what we label sensitivity bias. We make three contributions. First, we propose a social reference theory of sensitivity bias to structure expectations about survey responses on sensitive topics. Second, we explore the bias-variance trade-off inherent in the choice between direct and indirect…
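
For readers less familiar with the design the abstract refers to: a list experiment estimates the prevalence of a sensitive item as the difference in mean item counts between a treatment list (the control items plus the sensitive item) and a control list. The sketch below only illustrates that standard difference-in-means estimator; the sample size, item probabilities, and prevalence are assumptions, not figures from the paper.

```python
# Minimal sketch (not the authors' code) of the list experiment
# difference-in-means estimator, with simulated illustrative data.
import numpy as np

rng = np.random.default_rng(0)

n_per_arm = 1000              # hypothetical arm size
true_prevalence = 0.30        # assumed prevalence of the sensitive trait
J = 3                         # number of non-sensitive control items

# Control arm answers J items; treatment arm answers J items plus the sensitive one.
control_counts = rng.binomial(J, 0.5, size=n_per_arm)
holds_trait = rng.binomial(1, true_prevalence, size=n_per_arm)
treatment_counts = rng.binomial(J, 0.5, size=n_per_arm) + holds_trait

# Difference in mean item counts estimates prevalence of the sensitive item.
estimate = treatment_counts.mean() - control_counts.mean()
se = np.sqrt(treatment_counts.var(ddof=1) / n_per_arm
             + control_counts.var(ddof=1) / n_per_arm)

print(f"list-experiment estimate: {estimate:.3f} (SE {se:.3f})")
```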

Cited by 155 publications (143 citation statements)
References 89 publications
“…A 2018 report found that 38% of the female prison population in Senegal is imprisoned for alleged abortion or infanticide [36]; with such powerful legal repercussions, it is plausible that respondents did not feel comfortable disclosing abortion even via an indirect format. A recent review of list experiment results highlighted fear of legal repercussions as an important driver of underreporting in survey research [37]. While we found no evidence for systematic bias in the number of non-sensitive items reported (the design effect test), it could be that respondents accurately represented the number of non-sensitive items experienced but simply did not add abortion to their tally of experiences.…”
Section: PLOS ONE (citation type: contrasting)
confidence: 72%
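
The "design effect test" mentioned in the excerpt above is commonly associated with the Blair and Imai (2012) diagnostic for list experiments: estimate the proportion of each respondent type from the two arms and flag negative estimates, which suggest the treatment list changed answers to the non-sensitive items. The sketch below is a hedged, simplified version of that moment-based calculation, without the bootstrap correction the full test uses, and with fabricated illustrative data rather than data from the cited study.

```python
# Hedged sketch of the intuition behind the design effect diagnostic:
# estimate pi(y, z), the share of respondents who affirm y non-sensitive
# items and hold (z = 1) or do not hold (z = 0) the sensitive trait.
# Negative estimates point to a possible design effect.
import numpy as np

def respondent_type_proportions(y_control, y_treatment, J):
    """Moment estimates of pi(y, z) for y = 0..J and z in {0, 1}."""
    y_control = np.asarray(y_control)
    y_treatment = np.asarray(y_treatment)

    def F0(y):  # empirical CDF of item counts, control arm
        return np.mean(y_control <= y)

    def F1(y):  # empirical CDF of item counts, treatment arm
        return np.mean(y_treatment <= y)

    pi = {}
    for y in range(J + 1):
        pi[(y, 1)] = F0(y) - F1(y)
        pi[(y, 0)] = F1(y) - (F0(y - 1) if y > 0 else 0.0)
    return pi

# Illustrative fake data with J = 3 control items.
rng = np.random.default_rng(1)
yc = rng.binomial(3, 0.5, 500)
yt = rng.binomial(3, 0.5, 500) + rng.binomial(1, 0.3, 500)

pi_hat = respondent_type_proportions(yc, yt, J=3)
negatives = {k: v for k, v in pi_hat.items() if v < 0}
print("estimated type proportions:", {k: round(v, 3) for k, v in pi_hat.items()})
print("negative estimates (possible design effect):", negatives)
```
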
“…Our empirical analysis contributes to the literature on list experiments by providing the clearest evidence to date that list experiments do induce an additional and nontrivial amount of non-strategic misreporting error in practice, even in relatively educated samples previously thought to be least prone to such behavior (Kramon and Weghorst, 2019). This in turn informs applied survey research on sensitive topics by highlighting the need for researchers deciding between a direct question and a list experiment to consider not just the well-known trade-off between strategic misreporting under the direct question and the statistical inefficiency of the list experiment (Blair, Coppock and Moor, 2020), but also the misreporting trade-off between direct question strategic misreporting and list experiment non-strategic misreporting.…”
Section: Introduction (citation type: mentioning)
confidence: 98%
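
To make the contrast in the excerpt above concrete, the simulation sketch below sets a hypothetical rate of strategic denial under a direct question against a hypothetical rate of non-strategic mis-counting under a list experiment. All rates, sample sizes, and distributions are assumptions for illustration, not estimates from this literature.

```python
# Hedged simulation sketch: strategic underreporting biases the direct
# question downward, while non-strategic slippage in the list count adds
# noise (and, because counts are clipped at the scale ends, a little bias).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
true_prev = 0.30
p_withhold = 0.40        # assumed share of true holders who deny when asked directly
p_slip = 0.10            # assumed share of list respondents who mis-count by one item

holds = rng.binomial(1, true_prev, n)

# Direct question with strategic misreporting.
direct_report = holds * rng.binomial(1, 1 - p_withhold, n)
direct_estimate = direct_report.mean()

# List experiment with non-strategic misreporting in the item count.
yc = rng.binomial(3, 0.5, n)                       # control arm counts
yt = rng.binomial(3, 0.5, n) + holds               # treatment arm counts
slip = rng.choice([-1, 0, 1], size=n, p=[p_slip / 2, 1 - p_slip, p_slip / 2])
yt_observed = np.clip(yt + slip, 0, 4)
list_estimate = yt_observed.mean() - yc.mean()

print(f"truth {true_prev:.2f} | direct {direct_estimate:.3f} | list {list_estimate:.3f}")
```
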
“…Different techniques to unobtrusively ask sensitive questions do exist, out of which the list experiment is one. In general, these techniques come at the cost of statistical efficiency, but when bias is large-like in the study at hand-the bias-variance trade-off should come down in favor of unbiased (or less biased) estimators (Blair et al 2018). In essence, this means that researchers will need larger samples and more sophisticated survey designs to accurately capture sensitive topics like corruption victimization.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
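
A back-of-the-envelope version of the bias-variance trade-off discussed above: compare the root-mean-squared error of a biased but precise direct question with an unbiased but noisier list experiment as the sample size grows. The bias, prevalence, and control-item values below are assumptions chosen for illustration, not estimates from the cited studies.

```python
# Hedged sketch: RMSE of a direct question with fixed sensitivity bias
# versus an unbiased list experiment whose variance shrinks with n.
import numpy as np

prevalence = 0.30
bias_direct = 0.10                        # assumed sensitivity bias of the direct question
var_z = prevalence * (1 - prevalence)     # variance of the sensitive-trait indicator
var_controls = 3 * 0.5 * 0.5              # variance of 3 Bernoulli(0.5) control items

def rmse_direct(n):
    # Biased but precise: all n respondents answer the direct question.
    return np.sqrt(bias_direct ** 2 + var_z / n)

def rmse_list(n):
    # Unbiased but noisy: difference in means across two arms of n/2 each.
    var_estimate = (var_controls + var_z) / (n / 2) + var_controls / (n / 2)
    return np.sqrt(var_estimate)

for n in (200, 500, 1000, 2000, 5000):
    print(f"n={n:5d}  direct RMSE {rmse_direct(n):.3f}  list RMSE {rmse_list(n):.3f}")
```

Under these assumed numbers the list experiment overtakes the direct question at a few hundred respondents, which matches the point in the excerpt that less biased designs pay off when bias is large but require larger samples.
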
“…Second, I argue that direct questions about corruption experiences are likely to be sensitive and hence subject to 'sensitivity bias' (or 'social desirability bias') (Blair et al 2018). Research shows that citizens around the world in general have a strong distaste for corruption and bribe giving (see below).…”
Section: Introduction (citation type: mentioning)
confidence: 99%