2021
DOI: 10.3390/jmse9111178

Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization

Abstract: Deep neural networks (DNNs) can be useful within the marine robotics field, but their utility value is restricted by their black-box nature. Explainable artificial intelligence methods attempt to understand how such black-boxes make their decisions. In this work, linear model trees (LMTs) are used to approximate the DNN controlling an autonomous surface vessel (ASV) in a simulated environment and then run in parallel with the DNN to give explanations in the form of feature attributions in real-time. How well a…
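As a rough illustration of the idea in the abstract (a hypothetical sketch, not the paper's implementation; the feature names and values below are invented): a linear model tree routes the current state through axis-aligned splits to a leaf holding a linear model, and that leaf's per-term products w_i * x_i can serve as real-time feature attributions for the surrogate's output.

```python
# Minimal sketch (hypothetical, not the authors' code): a linear model tree
# leaf explains a control output via each feature's additive contribution.

class Leaf:
    def __init__(self, weights, bias):
        self.weights = weights  # one weight per input feature
        self.bias = bias

    def predict(self, x):
        # Linear model: w . x + b
        return sum(w * xi for w, xi in zip(self.weights, x)) + self.bias

    def attributions(self, x):
        # Feature attributions: the additive contribution w_i * x_i of
        # each feature; these sum (with the bias) to the prediction.
        return [w * xi for w, xi in zip(self.weights, x)]


class Split:
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right

    def route(self, x):
        # Walk axis-aligned splits down to the responsible leaf.
        node = self
        while isinstance(node, Split):
            node = node.left if x[node.feature] <= node.threshold else node.right
        return node


# Invented 2-feature example: x[0] = distance to dock, x[1] = heading error.
tree = Split(0, 10.0,
             Leaf([0.5, -0.2], 0.1),   # near the dock
             Leaf([0.1, -0.8], 0.0))   # far from the dock

x = [4.0, 0.3]                  # state observed at runtime
leaf = tree.route(x)
print(leaf.predict(x))          # surrogate's control command
print(leaf.attributions(x))     # per-feature explanation of that command
```

Because the leaf model is linear, the explanation is exact for the surrogate: the attributions plus the bias reconstruct the prediction, which is what makes the parallel, real-time use described in the abstract cheap.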


Cited by 12 publications (4 citation statements)
References 30 publications
“…Gjaerum et al. [14] propose to use decision trees with linear functions in the leaf nodes (linear model trees [87]), showing that they can be valid for generating counterfactual explanations in relatively competitive scenarios with multiple, continuous outputs. Specifically, the challenge is solved using H-LMTs (Heuristic Linear Model Trees) [88]. An H-LMT is constrained not by a maximum depth but by the maximum number of leaf nodes allowed.…”
Section: Exploring the Recent Past: Why Did You Do This Instead of That?
confidence: 99%
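The leaf-budget constraint described above can be sketched as best-first growth: instead of capping depth, always split the worst leaf until the allowed number of leaves is reached. This is a hypothetical illustration of the general idea; the actual heuristic and data structures of [88] are not reproduced here, and `split_fn`/`halve` are invented stand-ins.

```python
# Sketch (hypothetical, not [88]'s code) of growing a tree under a leaf
# budget: repeatedly split the leaf with the largest error, with no depth
# limit, until max_leaves leaves exist or no leaf can be split further.
import heapq

def grow_by_leaf_budget(initial_leaf, split_fn, max_leaves):
    counter = 0  # tie-breaker so the heap never compares leaf objects
    heap = [(-initial_leaf["error"], counter, initial_leaf)]
    n_leaves = 1
    while n_leaves < max_leaves and heap:
        _, _, leaf = heapq.heappop(heap)   # leaf with the largest error
        children = split_fn(leaf)          # two child leaves, or None
        if children is None:
            continue                       # unsplittable; stays a leaf
        for child in children:
            counter += 1
            heapq.heappush(heap, (-child["error"], counter, child))
        n_leaves += 1                      # one leaf replaced by two
    return n_leaves

# Toy split function: halves the leaf error; refuses to split below 0.1.
def halve(leaf):
    if leaf["error"] < 0.1:
        return None
    e = leaf["error"] / 2
    return [{"error": e}, {"error": e}]

print(grow_by_leaf_budget({"error": 1.0}, halve, max_leaves=8))  # prints 8
```

A leaf budget bounds the number of linear models (and thus explanation complexity) directly, whereas a depth cap only bounds it indirectly and can waste capacity on easy regions of the state space.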
“…A problematic feature of ANNs is their black-box nature, which makes it difficult to understand their decision-making process. However, there are some attempts to use methods from explainable AI (XAI) to make sense of the decisions made by ANNs employed for the docking problem [159], [160], [164], [209].…”
Section: ANN (Artificial Neural Network)
confidence: 99%
“…Liessner 2016) with SHAP values to create a new explanation method for controlling an aerial vehicle. Gjaerum et al. evaluated XAI methods such as LIME, Anchors, IG, SHAP, and SAGE for DNN explanations, tailored to developers and seafarers/operators and their need for quick, risk-aware decision-making [11]. Apart from these applications of Shapley values to RL, Beechey et al. (2023) present the first theoretical analysis of applying Shapley values to explain RL and show that previous uses are incorrect or incomplete [4].…”
Section: Introduction
confidence: 99%