2016
DOI: 10.29115/sp-2016-0008
The Impact of Parenthetical Phrases on Interviewers' and Respondents' Processing of Survey Questions

Abstract: Many surveys contain sets of questions (e.g., batteries) in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements is …

Cited by 6 publications (10 citation statements)
References 8 publications
“…Additionally, question characteristics appear as a package (Dykema et al in press), and as such, we cannot disentangle how different question features interact or moderate each other (e.g., attitudinal questions with different types of response options). Despite these limitations, much of the work on respondent-interviewer interaction is conducted in face-to-face surveys and looks at one question at a time (e.g., Suchman and Jordan 1990; Dykema et al 1997), and less is known about telephone surveys, especially those conducted in a contemporary context (but see Dykema et al 2016). While this study cannot assess some question types and question characteristics, it extends our knowledge about respondent-interviewer interaction to contemporary telephone surveys across multiple items.…”
Section: Discussion
confidence: 95%
“…Surprisingly, questions where there was a response task or concept mismatch between questions and response options had a higher rate of adequate answers and fewer requests for clarification. From this analysis, we do not know whether interviewers preemptively changed the question wording in order to address these task and conceptual misalignments (similar to an interviewer decision to read parentheticals; Dykema et al 2016). Future research will examine the interviewer-respondent interaction on mismatched questions in more detail.…”
Section: Discussion
confidence: 98%
“…The question features summarized here give interviewers discretion over what to read, increasing interviewer burden (Japec 2008). Parenthetical information in a question may be skipped by interviewers, especially if it seems ancillary (Olson and Smyth 2015; Dykema, Schaeffer, Garbarski, Nordheim, Banghart 2016). Similarly, for many battery items, reading the question stem and response options is optional (i.e., for those appearing after the first or second item) (Ongena and Dijkstra 2007; Olson, Smyth and Cochran 2018).…”
Section: Questions Requiring Interviewer Decisions
confidence: 99%
“…Because interviewers may adapt question reading to respondents who are particularly likely to have problems (Dykema et al 2016), we control for common measures of respondent cognitive ability, age, and education (e.g., Belli, Weiss, and Lepkowski 1999; Cannell, Fowler and Marquis 1968). Age in years is included as a grand-mean-centered continuous variable (mean WLT: 61.34; NOP: 54.95).…”
Section: Control Variables
confidence: 99%
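The control-variable passage above mentions including age as a grand-mean-centered continuous variable. As a minimal sketch of what that transformation involves (not the cited authors' actual code; the function name and example values are hypothetical), grand-mean centering simply subtracts the overall sample mean from each observation, so that the model intercept corresponds to a respondent of average age:

```python
def grand_mean_center(values):
    """Subtract the grand (overall) mean from each value.

    After centering, the values sum to zero (up to floating-point
    error), and a regression intercept refers to the sample average
    rather than to an age of zero.
    """
    mean = sum(values) / len(values)
    return [v - mean for v in values]

# Illustrative ages only; the quoted sample means (WLT: 61.34,
# NOP: 54.95) come from the cited study, not from this toy data.
ages = [58, 62, 64]
centered = grand_mean_center(ages)
```

In a multi-sample design like the one quoted (WLT and NOP), the analyst must also decide whether to center on the pooled grand mean or within each sample; the passage does not say which choice was made.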