2020
DOI: 10.3390/e23010018
Explainable AI: A Review of Machine Learning Interpretability Methods

Abstract: Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning syst…

Cited by 1,582 publications (925 citation statements)
References 115 publications
“…However, even with adequate sampling of natural variability in the training dataset, the underestimation of the precipitation response to natural forcings such as volcanic activity and natural variability such as the El Niño–Southern Oscillation in GCMs could still affect the results 62 . We also note that different ANN visualization techniques are available 63-65 , and those should be explored to understand the sensitivity of the extracted fingerprints to the ANN visualization technique. Despite these limitations, it is clear that ANN DAI methods with ANN visualization techniques are very useful and efficient in identifying the human influence on variables that are highly uncertain in GCMs and poorly characterized in observations, such as extreme precipitation.…”
Section: Discussion
confidence: 99%
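The excerpt above mentions that several ANN visualization techniques exist for interrogating what a trained network has learned. As a loose, self-contained illustration (not code from any cited work), the following sketches the simplest such technique, a gradient-based saliency map, for a toy linear classifier. All names, weights, and shapes here are invented for the example; for a linear layer, the gradient of a class score with respect to the input is just that class's weight row, which makes the idea easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # toy weights: 3 classes, 5 input features

def forward(x):
    # linear scores followed by a numerically stable softmax
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency(k):
    # gradient of the class-k score w.r.t. the input; for a linear
    # layer this is exactly W[k], so |W[k]| ranks feature influence
    return np.abs(W[k])

x = rng.normal(size=5)       # one toy input
p = forward(x)               # class probabilities
k = int(np.argmax(p))        # predicted class
s = saliency(k)              # per-feature importance for that class
print(s.argsort()[::-1])     # features ranked by influence on class k
```

For a deep network the same recipe applies, except the gradient is obtained by backpropagation through all layers rather than read off a single weight matrix.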
“…Weber et al. [97] presented a comprehensive review of machine learning interpretability methods using four categories: (i) methods for explaining black-box models (17 methods focused on DL method interpretations, 16 methods that can explain any black-box model); (ii) methods for creating white-box models (5 methods); (iii) methods that promote fairness and restrict discrimination; and (iv) methods that analyze the sensitivity of model predictions (28 methods). Most interpretability methods are focused on DL, largely ruled by neural networks, and are evaluated on image classification explanation.…”
Section: (A) Year-wise ML Methods Published in AML Domain; (B) Interpretability of Models Used in AML Solutions; (C) Machine Learning Techniques
confidence: 99%
“…First, it provides a lower bound on the classification accuracy for more elaborate, e.g., neural-network-based, classifiers, while being entirely explainable [15]. Second, because they rely on simple heuristics, they are much less computationally demanding, and so can be effortlessly implemented as online triggers working within the limited resources of smartphones.…”
Section: Baseline Trigger
confidence: 99%