Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
2016 · Preprint
DOI: 10.48550/arxiv.1612.08468

Cited by 54 publications (82 citation statements)
References 8 publications
“…Taking our dataset as an example, TT MOD is somewhat correlated with other travel time variables, since the origin-destination distance is the same. To overcome this issue, a potential direction is to apply the accumulated local effects (ALE) plot (Apley, 2016), by only visualizing how the model predictions change in a small "window" around a particular feature value. This idea can be applied to PDPs, ICE plots, and their generalizations proposed in the paper.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
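The "window" mechanism described in this excerpt can be illustrated with a short sketch of a first-order ALE computation: prediction differences are taken only within quantile bins of the feature of interest and then accumulated. The function name, the quantile-based binning, and the centering step are illustrative assumptions, not Apley's reference implementation.

```python
# A minimal sketch of a first-order ALE estimate for one feature.
import numpy as np

def ale_1d(predict, X, feature, n_bins=20):
    """Accumulated local effects of one feature for a fitted model.

    predict : callable mapping an (n, p) array to (n,) predictions
    X       : (n, p) array of observed data
    feature : column index of the feature of interest
    """
    x = X[:, feature]
    # Bin edges from empirical quantiles, so every bin contains observed data.
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    idx = np.clip(np.digitize(x, edges[1:-1], right=True), 0, len(edges) - 2)

    local_effects = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        members = np.where(idx == k)[0]
        if members.size == 0:
            continue
        lo, hi = X[members].copy(), X[members].copy()
        lo[:, feature] = edges[k]      # move each point to the bin's lower edge
        hi[:, feature] = edges[k + 1]  # ... and to its upper edge
        # Average prediction change inside the bin: a "local" effect computed
        # only from realistic neighbouring points.
        local_effects[k] = np.mean(predict(hi) - predict(lo))

    ale = np.concatenate(([0.0], np.cumsum(local_effects)))  # accumulate
    # Center so the effect averages to zero over the data distribution.
    bin_counts = np.bincount(idx, minlength=len(edges) - 1)
    centers = (ale[:-1] + ale[1:]) / 2
    ale -= np.average(centers, weights=bin_counts)
    return edges, ale
```

Because each difference perturbs points only within their own bin, the estimate avoids the unrealistic feature combinations that make PDPs unreliable when predictors are correlated, which is exactly the issue raised in the excerpt.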
“…Alibi provides a set of counterfactual explanations, such as cem, and, interestingly, an implementation of anchor [103]. Regarding global explanation methods, Alibi contains ale (Accumulated Local Effects) [11], which is a method based on partial dependence plots [59]. FAT-Forensics takes into account fairness, accountability and transparency.…”
Section: Explanation Toolboxes
Citation type: mentioning
Confidence: 99%
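As a complement to this excerpt, here is a minimal usage sketch of Alibi's ALE explainer. The dataset, model, and keyword arguments are illustrative and may differ between Alibi versions.

```python
# Illustrative use of Alibi's ALE explainer on a scikit-learn regressor.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from alibi.explainers import ALE, plot_ale

data = load_diabetes()
X, y = data.data, data.target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# ALE wraps the model's prediction function, not the model object itself.
ale = ALE(model.predict, feature_names=list(data.feature_names))
explanation = ale.explain(X)

# One ALE curve per feature, showing the accumulated local effect on the prediction.
plot_ale(explanation)
```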
“…For reproducibility reasons, we fixed the random seed 11. We refer the interested reader to: https://christophm.github.io/interpretable-ml-book/shapley.…”
Citation type: mentioning
Confidence: 99%
“…Motivated by (Apley, 2016), we propose the gradient tracking technique to achieve the input variable selection for a selected output variable for a particular sample for local interpretability. This is done through tracking back the gradient of the identified output variable $y_{kj}$ according to each individual input variable by backpropagation through the output linkage function $y_{kj} = g_{kj}(h_k; \theta^g_k)$ and the state transition matrix…”
Section: Improve Model Global Interpretability
Citation type: mentioning
Confidence: 99%
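The backpropagation step described in this excerpt can be sketched with automatic differentiation: select one output, backpropagate it to the inputs, and rank input variables by the magnitude of their gradients. The two-layer stand-in network and the absolute-gradient ranking heuristic are assumptions for illustration, not the cited paper's model.

```python
# A minimal sketch of gradient tracking for local input-variable selection.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained black-box network: inputs -> hidden state -> outputs.
model = nn.Sequential(nn.Linear(5, 8), nn.Tanh(), nn.Linear(8, 3))

x = torch.randn(1, 5, requires_grad=True)   # one particular sample
outputs = model(x)

selected_output = 2                          # output variable of interest
outputs[0, selected_output].backward()       # backpropagate only this output

# |d y_selected / d x_i| serves as a local importance score per input variable.
importance = x.grad.abs().squeeze()
ranking = torch.argsort(importance, descending=True)
print("input variables ranked by local influence:", ranking.tolist())
```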