2020
DOI: 10.31234/osf.io/jfeca
Preprint

Online Timing Accuracy and Precision: A comparison of platforms, browsers, and participant's devices

Abstract: Due to its increasing ease-of-use and ability to quickly collect large samples, online behavioral research is currently booming. With this increasing popularity, it is important that researchers are aware of who online participants are, and what devices and software they use to access experiments. While it is somewhat obvious that these factors can impact data quality, it remains unclear how big this problem is. To understand how these characteristics impact experiment presentation and data quality, we perform…

Cited by 47 publications (60 citation statements)
References 25 publications
“…Notably, these means are inflated by particularly poor performance with the Safari browser on Mac OS X. In the offline-based comparison, PsychoPy and OpenSesame achieved precisions of 1 ms to 4 ms, with only minor exceptions [60,61], most notably with audio playback.…”
Section: Data Quality Concerns
confidence: 99%
“…Additionally, modern screen refresh rates are almost exclusively set to 60 Hz (the de facto standard), making certain specifications of online studies somewhat more predictable. Among others [57–59], two recent large studies [60,61] investigated the timing precision (unintended variability in stimulus presentation) of several online and offline solutions. The online-based comparison found good overall precision for Gorilla (13 ms), jsPsych (26 ms), PsychoJS (−6 ms) and lab.js (10 ms).…”
Section: Data Quality Concerns
confidence: 99%
“…In general, latencies and variabilities are higher in web-based than in lab environments. Several studies have assessed the quality of timing in online studies, with encouraging results (Anwyl-Irvine, Dalmaijer, et al., 2020; Bridges et al., 2020; Pronk et al., 2019; Reimers & Stewart, 2015). An online evaluation of a masked priming experiment showed that very short stimulus durations (i.e., under 50 ms) can be problematic (but see Barnhoorn et al., 2014), but other classic experimental psychology paradigms that rely on reaction times (e.g., Stroop, flanker, and Simon tasks)…”
Section: Frequently Asked Questions
confidence: 99%
“…Indeed, when an experimental task is run online, technical limitations that cannot be resolved remotely, concerning timing (such as the accurate timing of visual stimulus presentation, or of the participants' responses) and largely related to the participants' bandwidth, should be considered. A discussion of such timing issues is beyond the scope of the present manuscript; however, further comments on the constraints of online behavioral tasks were reported by Crump et al. (2013) and, more recently, by Anwyl-Irvine et al. (2020). Because of these concerns about RTs, it is highly recommended to rate the individual's performance according to an index (such as percentage accuracy) registered independently of timing.…”
Section: Discussion
confidence: 94%