2023
DOI: 10.1007/s12559-023-10179-8
Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence

Vikas Hassija,
Vinay Chamola,
Atmesh Mahapatra
et al.

Abstract: Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported using machine learning (ML) and deep learning (DL) models. The majority of these models are inherently complex and lack explanations of the decision-making process, causing them to be termed 'black-box' models. One of the major bottlenecks to adopting such models in mission-critical application domains, s…

Cited by 233 publications (43 citation statements)
References 154 publications
“…This work contributes to the state of the art on readiness assessment models (e.g., Kim et al., 2022; Mioch, 2017; Mariajoseph et al., 2020; Baek et al., 2018; Deo & Trivedi, 2020; Du et al., 2021) by proposing an applicable tool to estimate driver readiness without relying on a subjective assessment of readiness as ground truth. Another advantage of the proposed solution is the mechanistic nature of the model, which allows researchers to directly observe the relationship between the driver-readiness estimation variables and the behaviour prediction without relying on black-box techniques (Hassija et al., 2024). The final contribution of this work is that the proposed methodology accounts for scenario variability, assuming that different safety-critical scenarios might require different readiness threshold values.…”
Section: Discussion
confidence: 99%
“…However, the parameter used as the ground truth in the training dataset of this study was a manual annotation of video data, in which experts described drivers' state as "good" or "bad" based on their subjective interpretation (Deo & Trivedi, 2020; Du et al., 2021). Although machine learning models can accurately predict the drivers' annotated state, according to Hassija et al. (2024), as artificial intelligence (AI) models become more complex, the relationship between a model's predictors and the predicted variable becomes too complex to explain. Therefore, machine learning approaches provide limited value as a tool for theoretically defining readiness and the safety thresholds required by DSM systems.…”
Section: Introduction
confidence: 99%
“…However, people do not (fully) trust the recommendations of a CNN, as CNNs are considered 'black boxes' [13]. 'Black box' means that the CNN hides the methodology behind its recommendations from the end users, creating the need for eXplainable Artificial Intelligence (XAI) [14]. XAI aims to make the recommendations of a DL algorithm understandable to end users so that they can trust the system.…”
Section: Introduction
confidence: 99%
“…XAI aims to make the recommendations of a DL algorithm understandable to end users so that they can trust the system. The use of XAI has been strongly advocated in healthcare [14].…”
Section: Introduction
confidence: 99%
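The citation statements above contrast opaque black-box predictions with explanations that expose how inputs drive outputs. One common model-agnostic XAI technique in that spirit is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below is purely illustrative and is not taken from the reviewed paper; the `black_box` scoring function and all parameter choices are hypothetical, stand-in assumptions.

```python
import random

def black_box(x):
    # Hypothetical opaque model: heavily weights feature 0, ignores feature 2.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(model, X, y):
    # Mean squared error of the model's predictions against targets y.
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=30, seed=0):
    # For each feature, shuffle its column and record the average
    # increase in error over the unshuffled baseline.
    rng = random.Random(seed)
    base = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            Xp = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
            increases.append(mse(model, Xp, y) - base)
        importances.append(sum(increases) / n_repeats)
    return importances

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]  # labels generated by the model itself

imp = permutation_importance(black_box, X, y)
print(imp)  # feature 0 should dominate; feature 2 should be ~0
```

Error increase scales roughly with the square of a feature's weight, so feature 0 (weight 3.0) shows a much larger importance than feature 1 (weight 0.5), and feature 2 (weight 0) shows none. Unlike inspecting internal weights, this procedure needs only query access to the model, which is what makes it applicable to genuine black boxes.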