Figure 1: An analyst uses Constellations to investigate results generated by previous analysts. Constellations organizes these visualizations with projection and clustering. Adjusting the data coverage, encoding choice, and keywords sliders changes how pairwise chart similarities are scored and updates the projected layout and cluster groupings. Several charts are tagged to show how their positions change.

Abstract: Many data problems in the real world are complex and require multiple analysts working together to uncover embedded insights by creating chart-driven data stories. How, as a subsequent analysis step, do we interpret and learn from these collections of charts? We present Chart Constellations, a system that interactively supports a single analyst in reviewing and analyzing the data stories created by other, collaborating analysts. Instead of iterating through the individual charts of each data story, the analyst can project, cluster, filter, and connect results from all users in a meta-visualization approach. Constellations supports deriving summary insights about prior investigations and exploring new, uninvestigated regions of the dataset. To evaluate our system, we conduct a user study comparing it against data science notebooks. Results suggest that Constellations promotes the discovery of both broad and high-level insights, including theme and trend analysis, subjective evaluation, and hypothesis generation.
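The projection-and-clustering behavior described in the caption can be illustrated with a short sketch. The chart fields, criterion weights, and algorithm choices below (Jaccard similarity, MDS, k-means) are assumptions made for illustration, not Constellations' actual implementation; the sliders in Figure 1 would correspond to adjusting the weights.

```python
# Minimal sketch: weighted pairwise chart similarity -> 2-D projection -> clustering.
# Field names, weights, and algorithms are assumptions, not Constellations' code.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def chart_similarity(c1, c2, w_coverage=1.0, w_encoding=1.0, w_keywords=1.0):
    """Weighted sum of per-criterion similarities, normalized to [0, 1]."""
    s = (w_coverage * jaccard(c1["columns"], c2["columns"])
         + w_encoding * (1.0 if c1["encoding"] == c2["encoding"] else 0.0)
         + w_keywords * jaccard(c1["keywords"], c2["keywords"]))
    return s / (w_coverage + w_encoding + w_keywords)

charts = [
    {"columns": {"year", "sales"},   "encoding": "line", "keywords": {"trend"}},
    {"columns": {"year", "profit"},  "encoding": "line", "keywords": {"trend"}},
    {"columns": {"region", "sales"}, "encoding": "bar",  "keywords": {"comparison"}},
]

# Pairwise dissimilarities drive the projected layout and the cluster groupings.
n = len(charts)
dissim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dissim[i, j] = 1.0 - chart_similarity(charts[i], charts[j])

coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(coords, labels)
```

Changing the weights re-scores all pairwise similarities, which in turn moves charts in the projection and can regroup the clusters, mirroring the slider behavior described in the caption.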
Building effective classifiers requires providing the modeling algorithms with information about the training data and the modeling goals in order to create a model that makes proper tradeoffs. Machine learning algorithms allow flexible specification of such meta-information through the design of the objective functions that they solve. However, such objective functions are hard for users to specify, as they require a precise mathematical formulation of the user's intent. In this paper, we present an approach that allows users to generate objective functions for classification problems through an interactive visual interface. Our approach adopts a semantic interaction design in which user interactions over data elements in the visualization are translated into objective function terms. The generated objective functions are solved by a machine learning solver that provides candidate models, which can be inspected by the user and used to suggest refinements to the specification. We demonstrate QUESTO, a visual analytics system that lets users manipulate objective functions to define domain-specific constraints. Through a user study we show that QUESTO helps users create a variety of objective functions that satisfy their goals.
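The core idea of composing an objective function from interaction-derived terms and scoring candidate models against it can be sketched as follows. The term names, weights, critical-instance constraint, and candidate models here are illustrative assumptions, not QUESTO's actual formulation or solver.

```python
# Hedged sketch: an objective combining an accuracy term with a user-specified
# "critical instances" term, used to rank candidate models. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

critical = [0, 5, 7]  # hypothetical test instances the user marked as "must be correct"

def objective(model, w_acc=1.0, w_critical=2.0):
    """Weighted sum of an accuracy term and a critical-instance term."""
    acc = model.score(X_te, y_te)
    crit = (model.predict(X_te[critical]) == y_te[critical]).mean()
    return w_acc * acc + w_critical * crit

# A stand-in for the solver: enumerate candidate models and keep the best-scoring one.
candidates = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
              for d in (2, 4, 8, None)]
best = max(candidates, key=objective)
print(best.get_params()["max_depth"], objective(best))
```

In the interactive setting, each user gesture over the visualization would add or reweight a term like `crit` above, and the returned candidate models would then inform the next round of refinements.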
Visual data storytelling is gaining importance as a means of presenting data-driven information or analysis results, especially to the general public. This has resulted in design principles being proposed for data-driven storytelling and new authoring tools being created to aid such storytelling. However, data analysts typically lack sufficient background in design and storytelling to make effective use of these principles and authoring tools. To assist this process, we present ChartStory for crafting data stories from a collection of user-created charts, using a style akin to comic panels to convey the underlying sequence and logic of data-driven narratives. Our approach operationalizes established design principles into a pipeline that characterizes charts by their properties and similarity, and recommends ways to partition, lay out, and caption story pieces to serve a narrative. ChartStory also augments this pipeline with intuitive user interactions for visual refinement of generated data comics. We extensively and holistically evaluate ChartStory via a trio of studies. We first assess how the tool supports data comic creation in comparison to a manual baseline tool. The data comics from this study are subsequently compared to ChartStory's automated recommendations and evaluated by a team of narrative visualization practitioners. This is followed by a pair of interview studies with data scientists, who use their own datasets and charts to provide an additional assessment of the system. We find that ChartStory provides cogent recommendations for narrative generation, resulting in data comics that compare favorably to manually-created ones.
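One step of the kind of pipeline described above, grouping an ordered chart collection into story pieces based on chart similarity, might look like the following sketch. The chart properties and the threshold-based grouping rule are assumptions for illustration, not ChartStory's actual recommendation algorithm.

```python
# Illustrative sketch: partition an ordered list of charts into story pieces
# by starting a new piece whenever adjacent charts share too few data fields.
def similarity(c1, c2):
    """Jaccard similarity over the data fields each chart uses."""
    union = c1["fields"] | c2["fields"]
    return len(c1["fields"] & c2["fields"]) / len(union) if union else 1.0

def partition(charts, threshold=0.5):
    pieces, current = [], [charts[0]]
    for prev, nxt in zip(charts, charts[1:]):
        if similarity(prev, nxt) >= threshold:
            current.append(nxt)      # similar enough: same story piece
        else:
            pieces.append(current)   # dissimilar: close the piece, start a new one
            current = [nxt]
    pieces.append(current)
    return pieces

charts = [
    {"title": "Sales by year",   "fields": {"year", "sales"}},
    {"title": "Profit by year",  "fields": {"year", "profit"}},
    {"title": "Sales by region", "fields": {"region", "sales"}},
]
for piece in partition(charts):
    print([c["title"] for c in piece])
```

A full pipeline would then lay out each piece as comic panels and generate captions; this sketch only shows the partitioning idea.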
How can we design Natural Language Processing (NLP) systems that learn from human feedback? There is a growing body of research on Human-in-the-loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself. HITL NLP research is nascent but multifarious: it addresses various NLP problems, collects diverse feedback from different people, and applies different methods to learn from the collected feedback. We present a survey of HITL NLP work from both the Machine Learning (ML) and Human-Computer Interaction (HCI) communities that highlights its short yet inspiring history, and we thoroughly summarize recent frameworks in terms of their tasks, goals, human interactions, and feedback-learning methods. Finally, we discuss future directions for integrating human feedback into the NLP development loop.
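One common pattern among such frameworks, repeatedly asking a human to label the examples the model is least certain about and then retraining, can be sketched as follows. The toy dataset, model, and uncertainty heuristic are illustrative assumptions rather than any specific framework from the survey.

```python
# Sketch of an uncertainty-driven human-in-the-loop labeling round; illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great movie", "terrible plot", "loved it", "boring and slow",
         "fantastic acting", "awful pacing", "really enjoyable", "not good"]
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

labeled = [0, 1]  # start with a couple of human-labeled examples
pool = [i for i in range(len(texts)) if i not in labeled]

for _ in range(3):  # each round simulates one ask-the-human step
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain example
    labeled.append(query)  # the human's label is simulated by the ground truth here
    pool.remove(query)

print("final training set size:", len(labeled))
```

Real HITL NLP frameworks vary widely in what feedback they collect (labels, corrections, rationales, rankings) and how they learn from it; this sketch shows only the simplest label-query loop.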