2021
DOI: 10.1017/pan.2021.10
The Misreporting Trade-Off Between List Experiments and Direct Questions in Practice: Partition Validation Evidence from Two Countries

Abstract: To reduce strategic misreporting on sensitive topics, survey researchers increasingly use list experiments rather than direct questions. However, the complexity of list experiments may increase nonstrategic misreporting. We provide the first empirical assessment of this trade-off between strategic and nonstrategic misreporting. We field list experiments on election turnout in two different countries, collecting measures of respondents’ true turnout. We detail and apply a partition validation method which uses …
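For context on the method the abstract discusses: a list experiment estimates the prevalence of a sensitive item from the difference in mean item counts between a treatment group (innocuous items plus the sensitive item) and a control group (innocuous items only). The sketch below is a minimal illustration on synthetic data; the 30% prevalence, the sample size, and all variable names are hypothetical assumptions, not figures from the paper.

```python
import random

random.seed(42)

# Synthetic list experiment (item count technique, ICT).
# Control group: respondents report how many of 3 innocuous items apply.
# Treatment group: same list plus one sensitive item (e.g. "I did not vote").
TRUE_PREVALENCE = 0.30  # hypothetical true rate of the sensitive item
N = 10_000              # respondents per group

def count_innocuous() -> int:
    # Each of 3 innocuous items applies independently with probability 0.5.
    return sum(random.random() < 0.5 for _ in range(3))

control = [count_innocuous() for _ in range(N)]
treatment = [count_innocuous() + (random.random() < TRUE_PREVALENCE)
             for _ in range(N)]

# Difference-in-means estimator: the gap in mean item counts between the
# treatment and control groups estimates the sensitive item's prevalence,
# without any individual respondent revealing their answer to it.
est = sum(treatment) / N - sum(control) / N
print(f"estimated prevalence: {est:.3f}")  # typically within ~0.03 of 0.30
```

The privacy protection comes at a cost the abstract highlights: the estimator's variance is inflated by the innocuous items, and the more complex counting task can introduce nonstrategic reporting errors that a direct question would not.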

Cited by 14 publications (14 citation statements)
References 28 publications
“…Our main practical recommendation is that researchers should use CM only with the greatest care and caution. With respect to ICT, our results contradict those of Kuhn and Vivyan (2020) and Riambau and Ostwald (2020): we do not find substantial rates of false positives and hence argue that the use of ICT is less problematic than the use of CM. This, however, should be further investigated in future studies.…”
Section: Introduction (contrasting)
confidence: 94%
“…They further conduct a small-scale meta-analysis of previous literature (Holbrook and Krosnick 2010; Ahlquist, Mayer, and Jackman 2014; Kiewiet de Jonge and Nickerson 2014) and conclude, although these studies arrive at inconsistent conclusions, that “inflation is more likely than not” (Riambau and Ostwald 2020, p. 2). Kuhn and Vivyan (2020) present an individual-level validation study based on reported voter turnout and official turnout records in a New Zealand and a London (UK) sample. Their results are alarming with respect to the validity of ICT and the “more-is-better” (or in this case: “less-is-better”) assumption: not only does ICT fail to alleviate strategic misreporting on the turnout question as compared to DQ (i.e., to reduce false negative reports among real nonvoters), it actually boosts false positive reports (i.e., real voters reporting they did not vote).…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition to the empirical shortcomings, the methodological concern remains. It has been argued that while the method of list experiments has been increasingly used in political science research, the issues deriving from measurement errors such as misreporting, lying, and ceiling and floor effects have not been properly addressed (Aronow et al, 2015; Blair et al, 2019; Kuhn and Vivyan, 2021). Recent studies have proposed a variety of statistical tests and estimation strategies to deal with them (Aronow et al, 2015; Blair et al, 2019; Kuhn and Vivyan, 2021).…”
Section: Discussion (mentioning)
confidence: 99%
“…However, we empirically examine the overall effects of these question designs on turnout misreporting across both non-voters and voters (as well as their effects specifically among non-voters and specifically among voters), since our main aim is not to test self-affirmation theory but rather to assess whether question designs based on the self-affirmation framework significantly improve self-reported measures of turnout in a typical online post-election survey. Given the greater relative frequency of voters compared to non-voters in most post-election survey samples, even small decreases in voters’ response accuracy can overwhelm any accuracy improvements among non-voters, and would thus reduce the overall accuracy of the alternative turnout question designs compared to the direct question (Kuhn and Vivyan, 2021).…”
Section: Methods (mentioning)
confidence: 99%
“…In particular, our research highlights the difficulties of using more complex question designs to reduce social desirability bias in online surveys. These question designs can face engagement problems and may increase respondent satisficing compared to standard direct questions (e.g., Kuhn and Vivyan, 2021).…”
mentioning
confidence: 99%