Proceedings of the Workshop on Human-in-the-Loop Data Analytics 2018
DOI: 10.1145/3209900.3209901
Evaluating Visual Data Analysis Systems

Abstract: Visual data analysis is a key tool for helping people to make sense of and interact with massive data sets. However, existing evaluation methods (e.g., database benchmarks, individual user studies) fail to capture the key points that make systems for visual data analysis (or visual data systems) challenging to design. In November 2017, members of both the Database and Visualization communities came together in a Dagstuhl seminar to discuss the grand challenges in the intersection of data analysis and interacti…

Cited by 16 publications (11 citation statements); references 28 publications.
“…In this work, we present initial steps towards an end-to-end performance benchmark for interactive, real-time querying scenarios, derived from user study data that we collected for crossfilter contexts (a representative of dynamic queries). Our benchmark design is inspired by recent proposals [6,17] and tutorials [28,29,61] across multiple communities, and incorporates methodology from HCI, visualization, and databases. We ran our benchmark with 128 workflows and five different DBMSs, and found that none of these systems could adequately support our benchmark workload for datasets considered far from large in the database community.…”
Section: Discussion
confidence: 99%
“…This subsection defines key HCI and visualization terms related to the user study, summarized from prior work (e.g., [6,9,24,34]): A goal is a high level concept, representing an end state or result that the user wishes to achieve. To accomplish a goal, the user executes a set of tasks, which may include solving a specific problem or selecting among different alternatives.…”
Section: Definitions and Terminology
confidence: 99%
“…We propose a novel approach to explore different/alternative definitions of the performance models while looking at their differences, hypothesize and test new performance models, and illustrate the results of these analyses in an interactive platform (see Battle et al 2018).…”
Section: Aim and Contribution
confidence: 99%