2018
DOI: 10.48550/arxiv.1809.06061
Preprint

Transparency and Explanation in Deep Reinforcement Learning Neural Networks

Cited by 3 publications (7 citation statements)
References 0 publications
“…Furthermore, model-agnostic explanation methods usually work by analyzing feature inputs and outputs and, by definition, do not have access to the models' internal information, such as weights or structural information. Shapley Additive Explanations (SHAP) tools [59], saliency maps [60], and Gradient-weighted Class Activation Mapping (Grad-CAM) [61] are widely used model-agnostic explanation tools.…”
Section: Model-specific or Model-agnostic
confidence: 99%
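To make concrete what "analyzing feature inputs and outputs" means in practice, here is a minimal model-agnostic attribution sketch in the occlusion/perturbation style; the `predict` interface, the baseline value, and the toy linear scorer are assumptions for illustration, not the SHAP, saliency-map, or Grad-CAM implementations cited above.

```python
import numpy as np

def occlusion_attribution(predict, x, baseline=0.0):
    """Model-agnostic feature attribution by input perturbation.

    Only the model's predict(inputs) -> scores interface is used;
    no weights or internal structure are accessed.
    """
    x = np.asarray(x, dtype=float)
    base_score = predict(x[None, :])[0]           # score for the original input
    attributions = np.zeros_like(x)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline                   # occlude one feature
        attributions[i] = base_score - predict(perturbed[None, :])[0]
    return attributions                           # large |value| = important feature

# Example with a toy black-box model (a fixed linear scorer, assumed for illustration):
if __name__ == "__main__":
    weights = np.array([0.5, -2.0, 0.0, 1.5])
    predict = lambda X: X @ weights
    print(occlusion_attribution(predict, [1.0, 1.0, 1.0, 1.0]))
```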
“…LRP has been used to show what information is important to the agent for a specific decision. The approach presented in [64] addresses the policy and response explanation problems by combining methods, contributing to our understanding of the role of saliency maps in explaining agents' behaviour. While saliency maps have been shown to improve classification decisions in images (albeit with 60% correctness in [68]), their role in RL is not that clear (e.g., as shown in [3]). However, they may provide significant positive effects when combined with other methods (as in [69]).…”
Section: Lime and Shapley Sampling Values Show Improvement In Terms O...
confidence: 99%
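As a rough illustration of how LRP redistributes a decision back onto the input (the mechanism the quoted passage relies on), the sketch below applies the LRP-epsilon rule to a toy two-layer ReLU network; the layer sizes, random weights, and epsilon value are assumptions for illustration and do not reproduce the setup of [64].

```python
import numpy as np

def lrp_epsilon(activations, weights, biases, eps=1e-6):
    """Layer-wise Relevance Propagation with the epsilon rule for a dense net.

    activations: [a0 (input), a1, ..., aL (output)] from a forward pass
    weights[l]:  matrix of shape (n_l, n_{l+1}); biases[l]: vector of shape (n_{l+1},)
    Returns a relevance score per input feature.
    """
    relevance = np.array(activations[-1], dtype=float)    # start from the output scores
    for l in range(len(weights) - 1, -1, -1):
        a, W, b = activations[l], weights[l], biases[l]
        z = a @ W + b                                     # pre-activations of layer l+1
        z = np.where(z >= 0, z + eps, z - eps)            # epsilon stabiliser
        s = relevance / z
        relevance = a * (s @ W.T)                         # redistribute relevance downwards
    return relevance

# Toy two-layer ReLU network (weights are arbitrary, for illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)
x = rng.normal(size=4)
h = np.maximum(0.0, x @ W1 + b1)
y = h @ W2 + b2
print(lrp_epsilon([x, h, y], [W1, W2], [b1, b2]))
```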
“…4.4.5 Understanding agent actions using specific and relevant feature attribution. Recognizing that perturbation-based approaches for RL, such as that proposed by S. Greydanus et al. in [66] and the O-DRL approach [3] in 4.4.4, tend to produce saliency maps that are not specific to the action of interest, N. Puri et al. in [63] propose SARFA to generate saliency maps that focus on explaining the specific action taken, balancing specificity and relevance.…”
Section: Transparency
confidence: 99%
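For orientation, the sketch below shows the basic perturbation idea that SARFA refines: perturb a local region of the observation, re-query the policy, and score the region by how much the chosen action's output changes. The policy interface, patch size, and mean-value "blur" are assumptions for illustration; SARFA's specificity/relevance reweighting is not reproduced here.

```python
import numpy as np

def perturbation_saliency(policy, obs, action, patch=5):
    """Perturbation-based saliency for one chosen action.

    policy: callable mapping an observation of shape (H, W) to a vector of action scores.
    Each patch is replaced by its local mean ("blur"); the saliency of the patch
    is the absolute change in the chosen action's score.
    """
    base = policy(obs)[action]
    H, W = obs.shape
    saliency = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            perturbed = obs.copy()
            region = perturbed[i:i + patch, j:j + patch]
            perturbed[i:i + patch, j:j + patch] = region.mean()
            saliency[i // patch, j // patch] = abs(base - policy(perturbed)[action])
    return saliency  # high values mark regions the decision depends on

# Toy example with a hypothetical policy that only looks at two image corners:
policy = lambda o: np.array([o[:5, :5].sum(), o[-5:, -5:].sum()])
obs = np.random.default_rng(1).random((20, 20))
print(perturbation_saliency(policy, obs, action=0))
```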