2018
DOI: 10.1111/cgf.13444

Information Visualization Evaluation Using Crowdsourcing

Abstract: Visualization researchers have been increasingly leveraging crowdsourcing approaches to overcome a number of limitations of controlled laboratory experiments, including small participant sample sizes and narrow demographic backgrounds of study participants. However, as a community, we have little understanding of when, where, and how researchers use crowdsourcing approaches for visualization research. In this paper, we review the use of crowdsourcing for evaluation in visualization research. We analyzed 190 cr…

Cited by 73 publications (50 citation statements)
References 154 publications
“…We found the highest number of incorrect responses for 12r (9%) and the lowest for 24l (2%). In four cases, users reported that the indicated values were equal when their value differences were rather low (1–7). This means users underestimated the value differences.…”
Section: Task 5: Compare AM/PM Interval Values (mentioning)
confidence: 95%
“…Therefore, it is sensible to hypothesize that there is no benefit to a 24-hour radial chart, as it suffers from known decoding deficiencies while not providing the clock-metaphor benefit of the 12-hour radial chart. As our goal was to investigate how well visual encodings are understood and accepted by a general audience, we crowdsourced our experiment using Amazon's Mechanical Turk, a system commonly used to perform graphical perception studies [7,31,38]. Previous studies comparing time series visualizations have also been conducted remotely through Mechanical Turk [2,11].…”
Section: Methods (mentioning)
confidence: 99%
“…Our experimental comparison of visualization design alternatives continues a tradition established by Cleveland and McGill [22] and continued by the crowdsourced graphical perception work of Heer and Bostock [31]. Our experiment also makes use of a crowdsourcing platform, which serves to circumvent some of the limitations of directly observed lab studies while providing a diverse and large participant pool [12,13]. Our work follows two recent crowdsourced experiments relating to visualization on mobile phones: our previous study comparing alternative ways to visualize ranges on mobile phones [17], and Schwab et al's study of panning and zooming techniques with temporal data [54].…”
Section: (Mobile) Visualization Evaluation Studies (mentioning)
confidence: 99%