2023
DOI: 10.3390/s23042013

A Bayesian Network Approach to Explainable Reinforcement Learning with Distal Information

Abstract: Nowadays, Artificial Intelligence systems have expanded their scope from research to industry and daily life, so understanding how they make decisions is becoming fundamental to reducing the lack of trust between users and machines and to increasing model transparency. This paper aims to automate the generation of explanations for model-free Reinforcement Learning algorithms by answering “why” and “why not” questions. To this end, we use Bayesian Networks in combination with the NOTEARS algorithm…
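The pairing named in the abstract (a Bayesian Network structure learned with the NOTEARS algorithm over experience collected from a model-free RL agent) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes the linear NOTEARS formulation, a hypothetical notears_linear helper, and arbitrary regularization and edge-thresholding parameters (lam, rho, 0.3), and it only shows how a weighted adjacency matrix over (state features, action, reward) samples from rollouts could be fit and read off as a network skeleton.

```python
# Hedged sketch: NOTEARS-style structure learning over RL trajectory data.
# Assumes variables are columns of X = [state features..., action, reward].
import numpy as np
import scipy.linalg
import scipy.optimize


def notears_linear(X, lam=0.1, rho=10.0):
    """Fit a weighted adjacency matrix W (d x d) from n x d data X."""
    n, d = X.shape

    def objective(w):
        W = w.reshape(d, d)
        resid = X - X @ W                               # linear SEM residuals
        ls = 0.5 / n * np.sum(resid ** 2)               # least-squares fit
        h = np.trace(scipy.linalg.expm(W * W)) - d      # NOTEARS acyclicity term
        return ls + lam * np.abs(W).sum() + 0.5 * rho * h ** 2

    # Numerical-gradient L-BFGS-B keeps the sketch short; a real implementation
    # would use the analytic gradient and an augmented Lagrangian schedule.
    res = scipy.optimize.minimize(objective, np.zeros(d * d), method="L-BFGS-B")
    W = res.x.reshape(d, d)
    W[np.abs(W) < 0.3] = 0.0                            # drop weak edges
    return W


# Usage: rows are samples gathered from agent rollouts (illustrative random data).
X = np.random.randn(500, 5)
print(notears_linear(X))
```

The nonzero entries of the returned matrix can then be treated as directed edges of a Bayesian Network, over which “why” and “why not” queries about the agent's action choices could be posed.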

Cited by 2 publications (1 citation statement)
References 43 publications
“…This interpretability information can be labeled as local or global, where local explanations focus on interpreting the predictions of a single action at a point in time and global explanations give a holistic view of the policy's behavior overall. 8 Our work focuses on the global interpretability of a DRL model as we aim to analyze the overarching policy to identify potential critical points that may affect a policy's success.…”
Section: Explainable Reinforcement Learning
confidence: 99%