2012
DOI: 10.3758/s13423-012-0296-9

Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments

Abstract: With the increasing sophistication and ubiquity of the Internet, behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement. It remains a common assumption, however, that data from self-selected Web samples must involve a trade-off between participant numbers and data quality. Concerns about data quality are heightened for performance-based cognitive and perceptual measures, particularly those that are timed or that involve…

Cited by 607 publications (526 citation statements)
References 34 publications
“…They used the AMT crowdsourcing platform to recruit participants, rather than the subject pool employed by Dandurand et al. Opening a study to the Internet at large is riskier than using a well-known subject pool, but Paolacci et al. still came to the same conclusion: data collected online were equivalent to those collected in the lab. This equivalency has also been demonstrated for survey-based investigations (Buhrmester et al., 2011) and other experimental paradigms (e.g., Germine et al., 2012; Heer & Bostock, 2010; Mason & Suri, 2012).…”
Section: Online Data Collection
confidence: 84%
“…We are concerned with how differences between crowds and traditional populations (i.e., crowds as a phenomenon) can affect the results of studies that seek to understand human behaviour. We situate this work in the space of comparative online-offline studies (e.g., Dandurand et al., 2008; Germine et al., 2012; Komarov et al., 2013).…”
Section: Introduction
confidence: 99%
“…Crump et al. (2013) were able to replicate findings from popular reaction-time tasks requiring participant attention on AMT, matching results from traditional laboratory settings. In an online cognitive and perceptual experiment, Germine et al. (2012) found that their data from challenging timed tasks were similar in quality to performance data gathered in a laboratory setting, despite participants being anonymous, uncompensated, and unsupervised.…”
Section: Stimuli
confidence: 99%
“…These studies have found that crowdsourcing websites often allow for the recruitment of more diverse and representative participants than in many lab settings, and provide results that are as reliable as lab-based experiments. In addition to the replication of linguistic results, as noted above, other experimental results in the social sciences have also been replicated, for example the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, classical experimental tasks drawn from the heuristics and biases literature, psychometric data, and even clinical findings (Gosling, Vazire, Srivastava, and John 2004; Ipeirotis, 2010; Ipeirotis, Provost, and Wang 2010; Paolacci, Chandler, and Ipeirotis 2010; Buhrmester, Kwang, and Gosling, 2011; Horton, Rand, and Zeckhauser 2011; Mason and Siddharth 2011; Berinsky, Huber, and Lenz 2012; Germine, Nakayama, Duchaine, Chabris, Chatterjee, and Wilmer 2012; Crump, McDonnell, and Gureckis 2013; Shapiro, Chandler, and Mueller 2013; and references therein). We note that these studies have also found limitations on the use of crowdsourcing for experimental studies, in particular when an experiment is excessively long or when insufficient compensation is offered.…”
Section: Online Crowdsourcing for Linguistic Experiments
confidence: 99%