2014
DOI: 10.1002/mpr.1415
Strategies to address participant misrepresentation for eligibility in Web‐based research

Abstract: Emerging methodological research suggests that the World Wide Web (“Web”) is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey…


Cited by 107 publications (116 citation statements)
References 23 publications
“…A total of 175 participants were screened out of the study due to not meeting eligibility criteria (n = 164) or due to endorsing inconsistent responses on items we included to remove individuals posing as veterans to obtain incentives (e.g., branch, rank, pay grade, and age needed to be consistent; n = 11). Methods such as these have been successful in weeding out misrepresenters and validating data from Internet studies in prior work (Kramer et al, 2014; Pedersen et al, 2015) and are described in more detail for this study in other work (Pedersen et al, 2016c). Of the 1,002 participants that began the baseline survey, 200 did not progress past initial questions on the survey and 9 did not pass verification checks (e.g., same participant attempted to access the survey multiple times, participant completed the survey too quickly to be valid), resulting in 793 randomized to either PNF based on gender-specific peers (n = 393) or attention control feedback about gender-specific peers’ video game playing behavior (n = 400).…”
Section: Methods
confidence: 99%
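The excerpt above describes a multi-stage verification approach: cross-checking related items for internal consistency (branch, rank, pay grade, and age), rejecting implausibly fast completions, and detecting repeat attempts. A minimal sketch of that kind of screening logic is below; the field names, the rank-to-pay-grade table, and the time cutoff are illustrative assumptions, not the cited study's actual instrument.

```python
# Hypothetical screening sketch: flag respondents whose reported military
# details are mutually inconsistent, who finished implausibly fast, or who
# appear more than once. All names and thresholds are illustrative.

# Illustrative mapping: each rank implies one pay grade.
RANK_TO_PAY_GRADE = {
    "Private": "E-1",
    "Sergeant": "E-5",
    "Captain": "O-3",
}

MIN_SECONDS = 300          # assumed "too fast to be valid" cutoff (5 min)
MIN_AGE_FOR_OFFICER = 21   # assumed plausibility floor for officer grades


def flag_response(resp, seen_ids):
    """Return a list of reasons to exclude this response (empty = keep)."""
    reasons = []
    # Consistency: reported rank must match reported pay grade.
    expected = RANK_TO_PAY_GRADE.get(resp["rank"])
    if expected is not None and resp["pay_grade"] != expected:
        reasons.append("rank/pay-grade mismatch")
    # Plausibility: officer pay grades below a minimum age are suspect.
    if resp["pay_grade"].startswith("O") and resp["age"] < MIN_AGE_FOR_OFFICER:
        reasons.append("implausible age for pay grade")
    # Speed: survey completed too quickly to be valid.
    if resp["duration_seconds"] < MIN_SECONDS:
        reasons.append("completed too quickly")
    # Duplicates: same participant attempting the survey multiple times.
    if resp["participant_id"] in seen_ids:
        reasons.append("duplicate participant")
    seen_ids.add(resp["participant_id"])
    return reasons
```

In practice a respondent would be excluded when any reason is returned, mirroring the excerpt's sequence of eligibility, consistency, and verification checks.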
“…To be eligible, individuals needed to be age 18 or older, and on active duty in the U.S. Armed Forces after September 2001 but not presently. We excluded individuals who completed surveys in less than five minutes, had more than one survey response, or incorrectly answered a military-related 'insider knowledge' question (to reduce chance of online survey misrepresentation) [19,27]. Survey completion was defined as those respondents who reached the end of the online survey and were not excluded based on the above quality control measures.…”
Section: Methods (Participants)
confidence: 99%
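This excerpt applies a related but distinct screen: an eligibility gate (age 18 or older, active duty after September 2001 but not presently) combined with a military "insider knowledge" item to deter misrepresentation. A sketch of that gate is below; the insider question, its answer, and the field names are illustrative placeholders, since the actual item is not published here.

```python
# Hypothetical eligibility screen: age 18+, prior (not current) active duty
# after September 2001, and a correct insider-knowledge answer. The answer
# string and field names are illustrative assumptions.
from datetime import date

SERVICE_CUTOFF = date(2001, 9, 1)
INSIDER_ANSWER = "reveille"  # placeholder for the unpublished item


def is_eligible(resp):
    """True if the respondent passes the screening rules sketched above."""
    if resp["age"] < 18:
        return False
    # Served on active duty after September 2001 ...
    if resp["active_duty_end"] <= SERVICE_CUTOFF:
        return False
    # ... but is not presently on active duty.
    if resp["currently_active_duty"]:
        return False
    # Insider-knowledge check to reduce the chance of misrepresentation.
    return resp["insider_answer"].strip().lower() == INSIDER_ANSWER
```

Speed and duplicate-response exclusions (under five minutes, more than one response) would then be layered on top of this gate, as the excerpt describes.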
“…We developed a monitoring system for every online enrollment to prevent fraudulent participants. Other procedural/design, technical/software, and data analysis strategies have been suggested [58]. eMoms was not designed to test whether electronically-mediated interventions are more effective in influencing weight outcomes than supervised or tele-health interventions. As a consequence, when analyzing the results, the effect of the intervention in and of itself and the effect of the method of intervention delivery (electronic media) will not be disentangled.…”
Section: Discussion
confidence: 99%