2014
DOI: 10.1002/job.1962
Best practice recommendations for data screening

Abstract: Survey respondents differ in their levels of attention and effort when responding to items. There are a number of methods researchers may use to identify respondents who fail to exert sufficient effort in order to increase the rigor of analysis and enhance the trustworthiness of study results. Screening techniques are organized into three general categories, which differ in impact on survey design and potential respondent awareness. Assumptions and considerations regarding appropriate use of screening techniqu…

Cited by 471 publications (381 citation statements); references 17 publications.
“…There are many methods available with the potential to identify various forms of LQD (see Curran, 2016; DeSimone et al., 2015). Some of these methods involve the direct assessment of response quality, others involve unobtrusive observation of respondent behavior patterns, and others require the calculation of statistical indicators.…”
Section: Methods for Detecting LQD
confidence: 99%
“…It is important to emphasize that not all screens are appropriate for use in all studies. The suitability of screening techniques depends on survey design and methodology (Curran, 2016; DeSimone et al., 2015; Dunn et al., in press). For example, time-based screens require survey administrators to measure the time required for each respondent to complete the survey.…”
Section: Selection of Screening Techniques
confidence: 99%
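The time-based screen mentioned in that excerpt can be sketched briefly. This is a minimal illustration, not the cited authors' procedure: the two-seconds-per-item floor, the variable names, and the sample times are all hypothetical choices for the example.

```python
# Minimal sketch of a time-based screen: flag respondents who finished
# faster than a plausible minimum completion time. The 2-seconds-per-item
# floor is an assumed threshold; pick one appropriate to your survey.

def flag_too_fast(completion_seconds, n_items, seconds_per_item=2.0):
    """Return True if completion time falls below the minimum floor."""
    return completion_seconds < n_items * seconds_per_item

# Hypothetical completion times (seconds) for a 60-item survey.
times = {"r1": 540.0, "r2": 35.0, "r3": 120.0}
flagged = [rid for rid, t in times.items() if flag_too_fast(t, n_items=60)]
print(flagged)  # ['r2'] -- 35 s is below the 120 s floor
```

Flagged cases are typically inspected alongside other screens rather than dropped on time alone.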
See 1 more Smart Citation
“…Thus we excluded 106 participants (1.8%) who gave the same response between 2 and 5 to at least 24 RSS for clear IER, but retained 842 participants (14.4%) who chose the lowest response option, not at all/does not apply, for at least 24 questions. Preferring to retain invalid responses rather than exclude valid responses, we implemented this screening technique (long string) [75] more permissively than Johnson [76], who used it to exclude 3.5% of another Web-based survey's participants. Four of our samples would have set a lower threshold by our criterion: absolute minima within samples occurred first at 17 (n = 0), 17 (1), 19 (0), 21 (3), and 24 (0) identical responses.…”
Section: Exclusion Criteria
confidence: 99%
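The long-string screen described in that excerpt computes each respondent's longest run of identical consecutive responses and flags runs at or above a threshold. A minimal sketch, using the excerpt's threshold of 24 and its more permissive restriction to mid-scale responses (here assumed to be values 2 through 5):

```python
# Sketch of a long-string screen: find the maximum run of identical
# consecutive responses, optionally counting only certain response
# values (mirroring the study's rule of screening responses 2-5 but
# retaining long runs of the lowest option).

from itertools import groupby

def longest_run(responses, allowed=None):
    """Length of the longest run of identical consecutive responses.
    If `allowed` is given, only runs of those values are counted."""
    best = 0
    for value, group in groupby(responses):
        if allowed is None or value in allowed:
            best = max(best, len(list(group)))
    return best

resp = [3] * 30 + [1, 4, 2]  # 30 identical mid-scale answers in a row
print(longest_run(resp, allowed={2, 3, 4, 5}) >= 24)  # True -> flag
```

Runs of the lowest option would not trigger the flag under this rule, matching the excerpt's decision to retain those respondents.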
“…These translations were proofread by native speakers and all ambiguities resolved in collaboration with the translators before the survey was implemented online. The survey employed numerous quality measures to maximize data quality and screen out careless responses, including instruction-based attention filters ("Please select strongly agree"), bogus items ("I always sleep less than one hour per night"), response pattern indicators (e.g., straight-lining), time filters, and self-reported data quality checks (e.g., "I gave this study enough attention") [41,42]. Participants failing the instruction-based attention filters were eliminated automatically while those failing multiple quality checks were replaced.…”
Section: Data Collection
confidence: 99%
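The instruction-based attention filters and bogus items described above can be sketched as a simple record-level check. The column names and the coding (5 = "strongly agree", responses below 3 = disagreement) are assumptions for illustration, not the cited survey's actual schema:

```python
# Sketch of attention-check filtering: keep respondents who select the
# instructed response ("Please select strongly agree") and who disagree
# with a bogus item ("I always sleep less than one hour per night").

records = [
    {"id": "a", "attn_item": 5, "bogus_sleep": 1},  # passes both checks
    {"id": "b", "attn_item": 2, "bogus_sleep": 1},  # failed instruction
    {"id": "c", "attn_item": 5, "bogus_sleep": 5},  # endorsed bogus item
]

def passes_checks(r):
    # Instructed response must be 5; the bogus item should be
    # disagreed with (<= 2 on a 5-point scale).
    return r["attn_item"] == 5 and r["bogus_sleep"] <= 2

clean = [r["id"] for r in records if passes_checks(r)]
print(clean)  # ['a']
```

In the survey described above, failing the instructed-response item triggered automatic elimination, while failing multiple checks led to replacement; the sketch simply flags both kinds of failure.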