2017
DOI: 10.1109/tvcg.2017.2674978

Towards Perceptual Optimization of the Visual Design of Scatterplots

Abstract: Designing a good scatterplot can be difficult for non-experts in visualization, because they need to decide on many parameters, such as marker size and opacity, aspect ratio, color, and rendering order. This paper contributes to research exploring the use of perceptual models and quality metrics to set such parameters automatically for enhanced visual quality of a scatterplot. A key consideration in this paper is the construction of a cost function to capture several relevant aspects of the human visu…
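
The abstract is truncated above, but its core idea, treating design parameters such as marker size and opacity as variables set automatically by minimizing a perceptually motivated cost function, can be sketched in a few lines. The cost terms below (an overdraw penalty and a faintness penalty) and the brute-force grid search are illustrative assumptions, not the paper's actual perceptual model:

```python
import numpy as np

def design_cost(points, marker_radius, opacity):
    """Toy stand-in for a perceptual cost function: penalize heavy
    marker overlap (overdraw) and overly faint markers. Both terms
    are illustrative, not the model from the paper."""
    n = len(points)
    # pairwise Euclidean distances between all points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    # fraction of point pairs whose markers overlap on screen
    overlap_frac = np.mean(d[iu] < 2 * marker_radius)
    overdraw = opacity * overlap_frac        # dense and opaque reads badly
    faintness = (1.0 - opacity) ** 2         # too transparent reads badly
    return overdraw + faintness

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2))

# brute-force search over the two design parameters
candidates = [(r, a)
              for r in np.linspace(0.01, 0.20, 10)
              for a in np.linspace(0.1, 1.0, 10)]
best_r, best_a = min(candidates, key=lambda c: design_cost(pts, *c))
print(f"chosen marker radius: {best_r:.2f}, opacity: {best_a:.2f}")
```

In the paper itself, the cost function is built from models of human perception, and the optimization also covers aspect ratio, color, and rendering order; the sketch only shows the overall shape of the approach: define a cost over design parameters, then search for the setting that minimizes it.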

Cited by 88 publications (62 citation statements) | References 40 publications

“…The number of papers reporting crowdsourcing experiments for InfoVis changed from one in 2009 to 13 in 2017 (Figure 1). While papers in the early rise of crowdsourcing had to dedicate an entire section to justify the use of crowdsourcing for their empirical studies (e.g., [MDF12]), recent papers simply cite previous InfoVis papers to justify the use of crowdsourcing for evaluating data visualizations (e.g., [MPOW17]). In this section, we report on the findings from our analysis of the 190 experiments with respect to the aspects and dimensions discussed in Section 3.…”
Section: Results (mentioning; confidence: 99%)

“…For instance, the text could ask the user to click somewhere else (e.g., on a specific region of the visualization) and not the provided response buttons (e.g., [RSC15]); participants who click on the buttons are considered inattentive and their data is excluded from analysis. One paper presented its experiments as micro-games [MPOW17]: participants were initially informed that a reward would be granted only if a minimum of 30% of the questions were answered correctly, and then shown their score halfway through the experiment.…”
Section: Inattentive Participants (mentioning; confidence: 99%)

“…Some researchers have suggested ensemble perception as an alternative to aggregation, where a person perceives distributional properties from groups of visual items spread over space or time [CG15, CH17, HRA15, KNKH18, SHGF16]. Accumulating evidence suggests that the visual system is capable of quickly, automatically and accurately extracting mental representations of ensembles [AO07, Alv11, HW09, HZ84, HWB15, LKW16, MPOW17]. Visualization researchers have examined how well people can infer statistics from encodings that vary in their level of aggregation for particular types of data, like time series [ARH12, ACG14], multi‐class scatterplots [GCNF13] or hierarchical data sets [EF10].…”
Section: Related Work (mentioning; confidence: 99%)