26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450658

Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML

Abstract: Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models. A critical, final step of AutoML is human selection of a final model from dozens of candidates. In current AutoML systems, selection is supported only by performance metrics. Prior work has shown that in practice, people evaluate ML models based on additional criteria, such as the way a model makes predictions. Comparison may happen …
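The abstract's point that performance metrics alone are not enough to choose among AutoML candidates can be made concrete with a small example. What follows is a minimal sketch in Python with scikit-learn; the models, data, and comparison levels are illustrative assumptions, not the paper's Model LineUpper system. It shows how two candidates with similar aggregate accuracy can still differ at the instance level and the feature level.

    # Illustrative sketch: comparing two candidate models at multiple levels,
    # in the spirit of multi-level model comparison (not the paper's actual tool).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Two candidates, as an AutoML search might produce.
    candidates = {
        "logreg": LogisticRegression(max_iter=1000).fit(X_train, y_train),
        "forest": RandomForestClassifier(random_state=0).fit(X_train, y_train),
    }
    preds = {name: m.predict(X_test) for name, m in candidates.items()}

    # Level 1: aggregate performance (what current AutoML UIs typically show).
    for name, p in preds.items():
        print(f"{name}: accuracy = {accuracy_score(y_test, p):.3f}")

    # Level 2: instance-level comparison -- where do the models disagree,
    # even if their aggregate scores are similar?
    disagree = preds["logreg"] != preds["forest"]
    print(f"models disagree on {disagree.sum()} of {len(y_test)} test instances")

    # Level 3: feature-level comparison -- which features does each model
    # lean on (via coefficient magnitudes and impurity importances here)?
    top_logreg = np.argsort(-np.abs(candidates["logreg"].coef_[0]))[:5]
    top_forest = np.argsort(-candidates["forest"].feature_importances_)[:5]
    print("top features (logreg):", top_logreg)
    print("top features (forest):", top_forest)

In a real AutoML workflow the candidates would come from the search itself, and the instances where models disagree would be the cases worth inspecting in a visual comparison tool.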

Cited by 21 publications (12 citation statements)
References 25 publications (24 reference statements)
“…One recent research thread, Interactive Machine Learning [21], is a promising solution to the "black box" issue of AI systems. Interactive machine learning often relies on advanced visualization and interaction techniques (e.g., [18,51,86]), allowing users to monitor, interpret, and intervene in the machine learning process so that they can establish trust in the algorithm outputs. In a similar vein, a group of Google researchers recently built an experimental visualization system to help physicians understand the AI interpretations of breast cancer imaging data [10].…”
Section: Issues in the Implementation of AI-CDSS (mentioning)
confidence: 99%
“…We can find support for this context-dependency in recent HCI work studying different XAI systems. For example, model debugging tools often integrate detailed local and global explanations (Narkar et al. 2021; Hohman et al. 2019). Decision-makers were found to have less desire for global explanations during time-constrained decisions, and to prefer less distracting information (Xie et al. 2020).…”
Section: XAI Evaluation (mentioning)
confidence: 99%
“…explainability, debugging user interfaces, etc.) are used [35], or could be used [20], and how to design them [4,52]. We build a design probe in the shape of a user interface by performing literature studies, a formative study, and co-creation sessions consisting of 18 interviews, to explore uses of explainability for debugging.…”
Section: Introduction (mentioning)
confidence: 99%