We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios were derived through an extensive literature review of over 800 visualization publications: evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis. These scenarios distinguish different study goals and types of research questions, and each is illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. First, we encapsulate current practices in the information visualization research community; second, we provide a different approach to deciding what might be the most effective evaluation of a given information visualization. The scenarios can be used to choose appropriate research questions and goals, and the accompanying examples can be consulted for guidance on how to design one's own study.
Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs, inspired by Norman's Seven Stages of Action, to facilitate their study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) decision costs to form goals; (2) system-power costs to form system operations; (3) multiple-input-mode costs to form physical sequences; (4) physical-motion costs to execute sequences; (5) visual-cluttering costs to perceive state; (6) view-change costs to interpret perception; and (7) state-change costs to evaluate interpretation. We also suggest ways to narrow the gulfs of execution (costs 2-4) and evaluation (costs 5-7) based on the collected reports. Our framework further suggests a need to consider decision costs (1) as a gulf of goal formation.
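As a reading aid (not taken from the paper itself), the seven costs and gulf groupings described above can be sketched as a small Python data model; the identifier names and structure are our illustrative assumptions:

from enum import Enum

class InteractionCost(Enum):
    """Seven interaction costs, numbered as in the framework."""
    DECISION = 1              # forming goals
    SYSTEM_POWER = 2          # forming system operations
    MULTIPLE_INPUT_MODE = 3   # forming physical sequences
    PHYSICAL_MOTION = 4       # executing sequences
    VISUAL_CLUTTERING = 5     # perceiving state
    VIEW_CHANGE = 6           # interpreting perception
    STATE_CHANGE = 7          # evaluating interpretation

# Gulf groupings as stated in the abstract: execution spans costs 2-4,
# evaluation spans costs 5-7, and decision costs (1) form a proposed
# gulf of goal formation.
GULFS = {
    "goal_formation": [InteractionCost.DECISION],
    "execution": [InteractionCost.SYSTEM_POWER,
                  InteractionCost.MULTIPLE_INPUT_MODE,
                  InteractionCost.PHYSICAL_MOTION],
    "evaluation": [InteractionCost.VISUAL_CLUTTERING,
                   InteractionCost.VIEW_CHANGE,
                   InteractionCost.STATE_CHANGE],
}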
In this paper, we investigate how people make sense of unfamiliar information visualizations. To that end, we conducted a qualitative study, observing 13 participants as they attempted to make sense of three visualizations they encountered for the first time: a parallel-coordinates plot, a chord diagram, and a treemap. We collected audio/video recordings of think-aloud sessions and semi-structured interviews, and analyzed the data using the grounded theory method. The primary result of this study is a grounded model of NOvice's information VIsualization Sensemaking (the NOVIS model), which consists of five major cognitive activities: (1) encountering the visualization, (2) constructing a frame, (3) exploring the visualization, (4) questioning the frame, and (5) floundering on the visualization. We introduce the NOVIS model by explaining the five activities with representative quotes from our participants. We also explore the dynamics within the model. Lastly, we compare the NOVIS model with other existing models and share further research directions that arose from our observations.
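To make the five activities concrete as a coding scheme, here is a minimal Python sketch; the enum values and the tagging helper are hypothetical and do not come from the study itself:

from enum import Enum

class NovisActivity(Enum):
    """The five major cognitive activities of the NOVIS model."""
    ENCOUNTERING_VISUALIZATION = 1
    CONSTRUCTING_A_FRAME = 2
    EXPLORING_VISUALIZATION = 3
    QUESTIONING_THE_FRAME = 4
    FLOUNDERING_ON_VISUALIZATION = 5

def tag_segment(segment: str, activity: NovisActivity) -> dict:
    """Hypothetical helper: attach an activity code to a think-aloud segment."""
    return {"text": segment, "activity": activity.name}

# Example with an invented transcript snippet:
# tag_segment("So each axis is a different variable?",
#             NovisActivity.CONSTRUCTING_A_FRAME)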
Visualization researchers and practitioners engaged in generating or evaluating designs face the difficult problem of transforming the questions asked and actions taken by target users from domain-specific language and context into more abstract forms. Existing abstract task classifications aim to support this endeavour by providing a carefully delineated suite of actions. Our experience is that this bottom-up approach is part of the challenge: low-level actions are difficult to interpret without the higher-level context of analysis goals and the analysis process. To bridge this gap, we propose a framework based on analysis reports derived from open-coding 20 design study papers published at IEEE InfoVis 2009-2015, building on previous abstraction work that collectively encompasses a broad variety of domains. The framework is organized along two axes and illustrated by nine analysis goals: it situates each goal by its specificity (Explore, Describe, Explain, Confirm) and by the number of data populations involved (Single, Multiple). The single-population goals are Discover Observation, Describe Observation, Identify Main Cause, and Collect Evidence; the multiple-population goals are Compare Entities, Explain Differences, and Evaluate Hypothesis. Each analysis goal is scoped by an input and an output and is characterized by the analysis steps reported in the design study papers. We provide examples of how we and others have used the framework in a top-down approach to abstracting domain problems: visualization designers or researchers first identify the analysis goal of each unit of analysis in an analysis stream, and then encode the individual steps using existing task classifications, given the context of the goal, its level of specificity, and the number of populations involved.
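The two axes and the named goals can likewise be sketched as a small Python data model. Note that the placement of each goal on the specificity axis below is our guess from the abstract's wording, not the authors' authoritative mapping:

from dataclasses import dataclass
from enum import Enum

class Specificity(Enum):
    EXPLORE = "Explore"
    DESCRIBE = "Describe"
    EXPLAIN = "Explain"
    CONFIRM = "Confirm"

class Population(Enum):
    SINGLE = "Single"
    MULTIPLE = "Multiple"

@dataclass(frozen=True)
class AnalysisGoal:
    name: str
    specificity: Specificity  # guessed placement; consult the paper
    population: Population

GOALS = [
    AnalysisGoal("Discover Observation", Specificity.EXPLORE, Population.SINGLE),
    AnalysisGoal("Describe Observation", Specificity.DESCRIBE, Population.SINGLE),
    AnalysisGoal("Identify Main Cause", Specificity.EXPLAIN, Population.SINGLE),
    AnalysisGoal("Collect Evidence", Specificity.CONFIRM, Population.SINGLE),
    AnalysisGoal("Compare Entities", Specificity.DESCRIBE, Population.MULTIPLE),
    AnalysisGoal("Explain Differences", Specificity.EXPLAIN, Population.MULTIPLE),
    AnalysisGoal("Evaluate Hypothesis", Specificity.CONFIRM, Population.MULTIPLE),
]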
In interfaces that provide multiple visual information resolutions (VIRs), low-VIR overviews typically sacrifice visual detail for display capacity, on the assumption that users can select regions of interest to examine at higher VIRs. Designers can create low-VIR views from multi-level structure inherent in the data, but have little guidance for single-level data. To better guide the design tradeoff between display capacity and visual-target perceivability, we looked at overview use in two multiple-VIR interfaces, with high-VIR displays either embedded within or separate from the overviews. We studied two visual requirements for effective overviews and found that participants reliably used the low-VIR overviews only when the visual targets were simple and had small visual spans; otherwise, at least 20% chose to use the high-VIR view exclusively. Surprisingly, neither multiple-VIR interface provided performance benefits compared to using the high-VIR view alone. However, we did observe benefits of side-by-side comparison for target matching. We conjecture that the high cognitive load of multiple-VIR interface interactions, whether real or perceived, is a greater barrier to their effective use than previously recognized.