2021
DOI: 10.1007/978-3-030-68796-0_5

Random Forest Model and Sample Explainer for Non-experts in Machine Learning – Two Case Studies

Cited by 5 publications (5 citation statements)
References 10 publications
“…This is why they are called "sample-based" methods. [13] In the following, we will briefly explain the "black box" methods and focus on "white box" methods in image classification tasks.…”
Section: Topology of Explanation Methods for Image Classification Tasks
confidence: 99%
“…To improve the stability of the LRP-0 rule, a small positive term ε is added to the denominator as shown in Eq. (13). The ε term also reduces the flow of the relevance if the activation of the neuron is very small or there is a weak connection between the two neurons.…”
Section: Layer-wise Relevance Propagation
confidence: 99%
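For context, the LRP-ε rule referred to in this statement is commonly written as below. This is a sketch of the standard formulation from the LRP literature; the citing paper's Eq. (13) is not reproduced in this snippet and is assumed to follow the same form.

```latex
% LRP-epsilon rule: relevance R_k of neuron k in the upper layer is
% redistributed to neuron j in the lower layer; a_j is the activation of
% neuron j, w_{jk} the weight connecting j to k, and epsilon > 0 stabilizes
% the denominator (and absorbs relevance when a_j w_{jk} is small).
R_j = \sum_k \frac{a_j w_{jk}}{\epsilon + \sum_{0,j} a_j w_{jk}} \, R_k
```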
“…This is why they are called "sample-based" methods. [11] In the following, we will briefly explain the "Black box" methods and focus on "White box" methods in image classification tasks.…”
Section: Topology of Explanation Methods for Image Classification Tasks
confidence: 99%
“…The delta between the original and the tweaked value represents the "tweaking cost" required to move the instance into the target class. Random forest model and sample explainer (RFEX) [62] returns numerical explanations, formatted as tables, of the predictions made by random forests in binary classification problems. The table contains the features of the dataset ranked according to their predictive power, measured by their mean decrease in accuracy, cumulative F1 score and Cohen distance.…”
Section: Ensembles
confidence: 99%
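To make the tabular output format described in this statement concrete, the sketch below builds a comparable feature-ranking table with scikit-learn. It is not the authors' RFEX implementation: permutation importance stands in for mean decrease in accuracy, and the dataset (scikit-learn's breast-cancer data) and all variable names are illustrative assumptions.

```python
# Sketch of an RFEX-style summary table (NOT the authors' RFEX code).
# Assumptions: a binary-classification dataset (here scikit-learn's
# breast-cancer data) and permutation importance as a stand-in for
# mean decrease in accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Rank features by permutation importance (proxy for mean decrease in accuracy).
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
order = np.argsort(imp.importances_mean)[::-1]

def cohens_d(a, b):
    # Effect size between the two class-conditional distributions of a feature.
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

print(f"{'feature':<25}{'importance':>12}{'cum. F1':>9}{'Cohen d':>9}")
for k in range(1, 11):  # report the top-10 features
    idx = order[k - 1]
    top = order[:k]
    # Cumulative F1: refit the forest on the top-k features only.
    sub = RandomForestClassifier(n_estimators=300, random_state=0)
    sub.fit(X_tr[:, top], y_tr)
    cum_f1 = f1_score(y_te, sub.predict(X_te[:, top]))
    d = cohens_d(X[y == 1, idx], X[y == 0, idx])
    print(f"{names[idx]:<25}{imp.importances_mean[idx]:>12.3f}"
          f"{cum_f1:>9.3f}{d:>9.2f}")
```

Each printed row mirrors one row of the table the statement describes: a feature, its importance, the F1 score achieved with the top-ranked features accumulated so far, and the Cohen distance between the two classes on that feature.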