Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300911
ATMSeer

Abstract: To relieve the pain of manually selecting machine learning algorithms and tuning hyperparameters, automated machine learning (AutoML) methods have been developed to automatically search for good models. Due to the huge model search space, it is impossible to try all models. Users tend to distrust automatic results and increase the search budget as much as they can, thereby undermining the efficiency of AutoML. To address these issues, we design and implement ATMSeer, an interactive visualization tool that supp…

Cited by 80 publications (18 citation statements)
References 29 publications
“…The AutoML paradigm of ML development will often (almost by definition) be unable to bring into account all the requisite factors, perspectives and constraints (which often require some domain expertise) that would otherwise need to be reconciled in addressing issues of fairness. Today's AutoML systems tend to limit customers' interaction with the system to a few specific decision points and present only limited to no information on how these systems work in operation and the complex processes behind the ML models generation and selection [86, 146-148, 153]. This 'black-box' nature of AutoML operation results in a situation where users will often be unable to understand how and why AutoML systems make the choices they make [148], thus hindering their ability to effectively reason about and mitigate potential biases embedded in their outputs.…”
Section: Compas (Race)mentioning
confidence: 99%
“…DF‐Seer [SFC*20], a model selection tool for demand forecasting models, supports performance analysis, particularly on different products and time periods. Other designs that concern model performance comparison of multiple models mostly appear in visual analytics systems for model ensembling [SJS*21, CMKK21a, CMKK21b, XXM*19] and AutoML [WMJ*19, NZL*21].…”
Section: Related Workmentioning
confidence: 99%
“…As machine learning is becoming an integral part of our lives, researchers have been investigating the biases that could arise and how to mitigate them [5,44]. Work on bias mitigation looked into the interpretability and transparency of these models [27,59,60], what industry practitioners need to improve the fairness in ML systems [29,30], and the perceived fairness of biased algorithms in current practices [26,51]. In our work, we looked into how much bias and fairness in algorithms is communicated as an aspect of ML models' quality and who within teams and organizations is interested in this.…”
Section: Algorithmic Biasmentioning
confidence: 99%