Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data, and, ultimately, for the user's actual understanding of and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how best to design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman's gulfs of execution and evaluation.
We extend the popular force-directed approach to network (or graph) layout to allow separation constraints, which enforce a minimum horizontal or vertical separation between selected pairs of nodes. This simple class of linear constraints is expressive enough to satisfy a wide variety of application-specific layout requirements, including: layout of directed graphs to better show flow; layout with non-overlapping node labels; and layout of graphs with grouped nodes (called clusters). In the stress majorization force-directed layout process, separation constraints can be treated as a quadratic programming problem. We give an incremental algorithm based on gradient projection for efficiently solving this problem. The algorithm is considerably faster than using generic constraint optimization techniques and is comparable in speed to unconstrained stress majorization. We demonstrate the utility of our technique with sample data from a number of practical applications including gene-activation networks, terrorist networks and visualization of high-dimensional data.
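To make the core idea concrete, the following Python sketch illustrates gradient projection for one axis of such a layout: each step minimizes a quadratic form (the per-axis majorization objective) and then projects the positions back onto pairwise separation constraints of the form x[u] + gap <= x[v]. This is a simplified illustration under our own assumptions, not the paper's implementation; in particular, the naive pairwise repair loop stands in for the more efficient block-based projection the authors describe.

```python
import numpy as np

def project_separations(x, constraints, max_sweeps=100):
    """Crudely enforce x[u] + gap <= x[v] for every (u, v, gap) by
    repeatedly moving violating pairs apart symmetrically.
    (A stand-in for the paper's projection step; illustrative only.)"""
    for _ in range(max_sweeps):
        violated = False
        for u, v, gap in constraints:
            excess = (x[u] + gap) - x[v]
            if excess > 1e-9:
                x[u] -= excess / 2.0
                x[v] += excess / 2.0
                violated = True
        if not violated:
            break
    return x

def gradient_projection(A, b, x0, constraints, steps=50):
    """Approximately minimize f(x) = 1/2 x^T A x - b^T x subject to
    separation constraints, by taking a gradient step with an exact
    line search and projecting back onto the feasible region."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = A @ x - b                              # gradient of the quadratic
        alpha = (g @ g) / (g @ (A @ g) + 1e-12)    # step length from line search
        x = project_separations(x - alpha * g, constraints)
    return x
```

In the constrained stress-majorization setting, a routine of this shape would be called once per axis in every majorization iteration, with A and b derived from the weighted Laplacian of the graph distances.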
With physicians' increasing workload and patients' growing needs for care, there is a need for technology that facilitates physicians' work and performs continuous follow-up with patients. Existing approaches focus merely on improving the patient's condition, and none have considered managing the physician's workload. This paper presents an initial evaluation of a conversational-agent-assisted coaching platform intended to manage physicians' fatigue and provide continuous follow-up to patients. We highlight the approach adopted to build the chatbot dialogue and the coaching platform. We particularly discuss the activity recommender algorithms used to suggest insights about patients' conditions and activities based on previously collected data. The paper makes three contributions: (1) it presents the conversational agent as an assistive virtual coach; (2) it reduces physicians' workload and enables continuous follow-up with patients by handling some repetitive physician tasks and performing initial follow-up with the patient; (3) it presents the activity recommender, which tracks previous activities and patient information and provides useful insights to the coach about possible activity-patient matches. Future work focuses on integrating the recommender model with the CoachAI platform and testing the prototype with patients in collaboration with an ambulatory clinic.
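For illustration, a minimal content-based sketch of such an activity recommender is shown below. The data model (patient conditions, per-activity adherence) and the scoring weights are hypothetical assumptions of ours; the abstract does not specify the actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    # Hypothetical patient profile; the paper does not specify its fields.
    id: str
    conditions: set = field(default_factory=set)
    completed: dict = field(default_factory=dict)  # past activity -> completion rate

def recommend_activities(patient, activities, top_k=3):
    """Rank candidate activities for a patient by (a) overlap between the
    activity's target conditions and the patient's conditions and
    (b) the patient's past adherence to similar activities.
    Each activity is a (name, target_conditions, similar_activities) tuple."""
    def score(activity):
        name, targets, similar_to = activity
        relevance = len(targets & patient.conditions) / max(len(targets), 1)
        adherence = sum(patient.completed.get(a, 0.0) for a in similar_to)
        adherence /= max(len(similar_to), 1)
        return 0.6 * relevance + 0.4 * adherence   # weights are illustrative
    return sorted(activities, key=score, reverse=True)[:top_k]
```

The ranked list would then be surfaced to the coach as suggested activity-patient matches rather than pushed directly to the patient.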
In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. Then, we discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered as a valuable instrument to analyze cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.
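As a concrete (hypothetical) illustration of the kind of measurements involved, the sketch below computes two eye-tracking indicators commonly associated with cognitive load, mean fixation duration and pupil dilation relative to a resting baseline. The record format is our own assumption; the paper's model relates such measures to cognitive load only qualitatively.

```python
from statistics import mean

def cognitive_load_indicators(fixations, baseline_pupil_mm):
    """Summarize two eye-tracking measures often linked to cognitive load.
    Each fixation is a dict with 'duration_ms' and 'pupil_mm' keys
    (an assumed format for illustration)."""
    durations = [f["duration_ms"] for f in fixations]
    pupils = [f["pupil_mm"] for f in fixations]
    return {
        "mean_fixation_ms": mean(durations),
        "pupil_dilation_mm": mean(pupils) - baseline_pupil_mm,
    }
```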
Figure 1. Exemplars of Human-Computer Integration: extending the body with additional robotic arms [70]; embedding computation into the body using electric muscle stimulation to manipulate handwriting [48]; and a tail extension controlled by body movements [86].