2021
DOI: 10.1016/j.dss.2021.113561
LINDA-BN: An interpretable probabilistic approach for demystifying black-box predictive models

Cited by 57 publications (15 citation statements)
References 11 publications
“…The decision surface of the model becomes smoother as the input space is restricted. Local interpretability is often achieved through the use of local example-based techniques or local surrogates, which simulate a limited region surrounding an example [24,25,23,26].…”
Section: Model-agnostic Counterfactual Algorithms
confidence: 99%
“…On the other hand, GBM is computationally effective and often outperforms other algorithms [53,54]. The three are also regarded as interpretable, i.e., essential for obtaining novel insights (e.g., new causal links between explanatory and response variables) and for troubleshooting the models (i.e., detecting and diagnosing biases in the input data and trained models) for improved satellite-based value-added products [55], as opposed to kernel-based and deep learning algorithms such as Kernel Ridge Regression (KRR), Support Vector Machines (SVM), and Neural Networks (NN), which are "black boxes", complex, and computationally expensive [56]. Therefore, the comparison of these three MLRAs in the context of crop biophysical parameter estimation is worthwhile to elucidate their capabilities under the same environmental and acquisition conditions and their consistency with previous performances.…”
Section: Machine Learning Regression Algorithms
confidence: 99%
“…The aim of the second phase is to develop a method to test the fidelity of explanations for a black box, using a white box model as a substitute to determine the appropriateness of the method and parameters used in the method. Specifically, the approach we proposed in a previous work will be refined and extended to better evaluate the fidelity of black box models (Velmurugan et al, 2021). This approach is adapted from an ablation-based method used to test the internal fidelity of tabular and text data, which tests the correctness of the explanation.…”
Section: Phase 2: White Box As Proxy For Black Box
confidence: 99%
“…While this approach of "removal" of features is relatively simple in image or text data, this approach will not hold for tabular data, where "gaps" in the input are automatically imputed by the predictive model, or are otherwise treated as some improbable value, such as infinity. Thus, in a previous work (Velmurugan et al, 2021), rather than removing features, we attempted to alter them through perturbation to reflect a value outside of the range of values considered to be relevant by the explanation.…”
Section: Phase 2: White Box As Proxy For Black Box
confidence: 99%