2021
DOI: 10.1088/1742-6596/1757/1/012075

Interpretable Saliency Map for Deep Reinforcement Learning

Abstract: Deep reinforcement learning (deep RL) has achieved great successes by leveraging deep learning techniques, but it also inherits their poor model interpretability. This lack of interpretability is a major obstacle to applying deep RL in real-world or human-machine interaction settings. Borrowed from the deep learning field, saliency-map techniques have recently become popular for improving the interpretability of deep RL. However, saliency maps still cannot provide specific and clear …
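The truncated abstract does not show the paper's own method, but the general idea of a saliency map for an RL policy can be illustrated with a minimal perturbation-based sketch: occlude each input feature in turn and measure how much the policy's action distribution changes. All names here (the toy linear `policy`, the occlusion-by-zeroing choice, the L1 distance) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy stand-in for a trained policy network: a fixed linear layer + softmax.
# A real deep RL agent would use a learned neural network here.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))  # 4 actions, 6 state features (arbitrary sizes)

def policy(state):
    """Return a softmax action distribution for a state vector."""
    logits = W @ state
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

def perturbation_saliency(state):
    """Saliency of each feature: how much occluding it changes the policy.

    Zeroing a feature is one simple occlusion scheme; blurring or noise
    injection are common alternatives in the saliency-map literature.
    """
    base = policy(state)
    sal = np.zeros_like(state)
    for i in range(state.size):
        perturbed = state.copy()
        perturbed[i] = 0.0                          # occlude feature i
        sal[i] = np.abs(policy(perturbed) - base).sum()  # L1 policy change
    return sal

state = rng.normal(size=6)
sal = perturbation_saliency(state)
print(sal.round(3))  # higher value = feature matters more to the policy
```

Features whose occlusion barely moves the action distribution receive near-zero saliency, which is exactly the kind of attribution such maps visualize over image observations in deep RL.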

Cited by 0 publications
References 3 publications