2013
DOI: 10.1007/978-3-642-32378-2_8
Safe and Interpretable Machine Learning: A Methodological Review

Cited by 38 publications (21 citation statements)
References 9 publications
“…People are working on more interpretable methods, but there seem to be tradeoffs involved in making such systems more interpretable (Vellido, Guerrero and Lisboa 2012; Lisboa 2013; Otte 2013; Chase Lipton 2015; Zeng, Ustun and Rudin 2015). Finally, compounding these two problems, there is the fact that algorithms are not singular phenomena.…”
Section: And Why Wouldn't We If We Are Promised Slimmer Waistlines (mentioning; confidence: 99%)
“…Doshi-Velez and Kim link interpretability to the need for an ML system to satisfy auxiliary criteria, i.e., criteria that are in part qualitative and cannot be satisfied by improved training (unlike, say, accuracy). While many examples are given by the authors (and others, e.g., Lipton), including being nondiscriminatory (as in fairness), safety (Otte, 2013), and satisfying a user's right to explanation (as in Goodman & Flaxman), there does not yet appear to be a comprehensive typology of these kinds of auxiliary criteria.…”
Section: Interpretability In ML-based AI Systems (mentioning; confidence: 99%)
“…Some of the most successful ML methods, such as Artificial Neural Networks (ANN) and deep learning techniques, suffer from the opaqueness of their models, which cannot be interpreted by human experts and therefore cannot explain the reasons for the outcomes they provide. This is a serious issue for ML adoption in all sectors that require accountability of decisions and robustness of outputs against accidental or deliberate input manipulation [15, 24]. Research efforts to make the results of ML techniques and systems decipherable are therefore growing.…”
Section: Related Work (mentioning; confidence: 99%)
“…Research efforts to make the results of ML techniques and systems decipherable are therefore growing. A conceptually simple approach is to exploit ensemble learning combining multiple low-dimensional submodels, where each individual submodel is simple enough to be verifiable by domain experts [24]. In [17], Bayesian learning was used to generate lists of rules in the if-then format.…”
Section: Related Work (mentioning; confidence: 99%)
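
The ensemble-of-low-dimensional-submodels idea cited above from [24] lends itself to a short illustration. The following is a minimal sketch, not the implementation from the reviewed paper: it assumes scikit-learn is available, and the dataset, the choice of two-feature logistic regressions as submodels, and the probability-averaging rule are all illustrative assumptions of this sketch.

```python
# Sketch of an ensemble of low-dimensional submodels (idea as in [24], not
# the paper's actual implementation): each submodel sees only two input
# features, so its decision surface can be plotted and checked by a domain
# expert, while the ensemble average recovers predictive power.
import itertools

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular binary-classification data would do.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One logistic submodel per feature pair (first 5 features for brevity).
# Each submodel is a 2-D model a human can inspect directly.
submodels = []
for i, j in itertools.combinations(range(5), 2):
    m = LogisticRegression(max_iter=1000).fit(X_tr[:, [i, j]], y_tr)
    submodels.append(((i, j), m))

# Ensemble prediction: average the submodels' class-1 probabilities.
proba = np.mean(
    [m.predict_proba(X_te[:, list(idx)])[:, 1] for idx, m in submodels],
    axis=0,
)
acc = ((proba > 0.5) == y_te).mean()
print(f"ensemble of {len(submodels)} 2-D submodels, test accuracy: {acc:.3f}")
```

The design point this sketch tries to capture is the tradeoff named in the first citation statement: each member stays simple enough to verify by eye, and interpretability is traded against the accuracy a single high-dimensional model might reach.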