2019
DOI: 10.1609/aaai.v33i01.330110007
Strategic Tasks for Explainable Reinforcement Learning

Abstract: Commonly used sequential decision-making tasks, such as the games in the Arcade Learning Environment (ALE), provide rich observation spaces suitable for deep reinforcement learning. However, they consist mostly of low-level control tasks, which are of limited use for the development of explainable artificial intelligence (XAI) due to the fine temporal resolution of the tasks. Many of these domains also lack built-in high-level abstractions and symbols. Existing tasks that provide for both strategic decision-making…

Cited by 12 publications (5 citation statements) | References 3 publications
“…The most common approach to providing these explanations is to develop a model of the agent's behaviour using a separate observer that learns that behaviour. Several generic explanation facilities can perform this task; for example, Pocius et al. [177] extend Local Interpretable Model-Agnostic Explanations (LIME) [183] and can provide contrastive explanations of any agent's behaviour, not solely an RL agent's. These generic explanation facilities can predict behaviour, but do not explain the agent's internal reasoning for its behaviour.…”
Section: Results of XRL-Behaviour
confidence: 99%
“…The reward decomposition approach has been proposed as the basis for a set of decision-making tasks in [4], in environments that naturally provide multiple reward signals: the goal is to enhance explainability by providing high-level abstractions for sequential tasks. The authors of [23] aim to enable an autonomous agent to reason over and answer questions about its underlying control logic 𝐿, independently of its internal representation.…”
Section: Distal Explanations for Explainable Reinforcement Learning P...
confidence: 99%
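The reward decomposition idea referenced above can be sketched as follows: instead of a single Q-value, the agent keeps one Q-value per reward component, an action's total value is the sum over components, and those per-component values then serve as the explanation. A minimal illustrative sketch (the component names and values below are hypothetical, not taken from the cited work):

```python
import numpy as np

# Hypothetical decomposed Q-values for three actions in one state:
# rows = reward components, columns = actions.
components = ["treasure", "damage_avoidance", "time_penalty"]
q_decomposed = np.array([
    [5.0,  1.0,  0.5],   # treasure
    [-0.5, 2.0,  0.0],   # damage_avoidance
    [-1.0, -1.0, -0.2],  # time_penalty
])

q_total = q_decomposed.sum(axis=0)  # overall Q-value per action
best = int(np.argmax(q_total))     # greedy action on the summed value

# Explanation: how much each reward component contributes
# to the value of the chosen action.
explanation = dict(zip(components, q_decomposed[:, best]))
print(best, explanation)
```

Because the decomposition is additive, contrasting two actions reduces to comparing their per-component columns, which is what makes this representation attractive for explanation.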
“…Moreover, Wang et al. [47] proposed an explainable recommendation system using an RL framework. Pocius et al. [31] utilized saliency maps as a way to explain agent decisions in a partially observable game scenario; thus, they focus on providing visual explanations with deep RL.…”
Section: Environment
confidence: 99%
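Saliency-map explanations of the kind attributed to Pocius et al. [31] estimate how strongly each input feature influences the agent's output; a model-agnostic variant perturbs each feature and measures the change. A minimal finite-difference sketch (the linear scoring function here is a stand-in for illustration, not the cited deep network):

```python
import numpy as np

def policy_value(obs, w):
    """Stand-in scalar policy output: a fixed linear scoring function."""
    return float(obs @ w)

def saliency(obs, w, eps=1e-3):
    """Finite-difference saliency: sensitivity of the output to each feature."""
    base = policy_value(obs, w)
    sal = np.zeros_like(obs)
    for i in range(obs.size):
        perturbed = obs.copy()
        perturbed[i] += eps
        sal[i] = abs(policy_value(perturbed, w) - base) / eps
    return sal

obs = np.array([0.2, -1.0, 0.5])
w = np.array([3.0, 0.1, -2.0])
print(saliency(obs, w))  # for a linear function this is ≈ |w|
```

For image observations the same per-feature sensitivity is rendered as a heat map over pixels, which is what makes the technique a visual explanation.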
“…As mentioned above, although there is a growing literature in different XAI subfields, such as explainable planners, interpretable RL, and explainable agency, only a few works address the XRL challenge in robotic scenarios. Some of those works, although focused on XRL in a certain sense, have different aims than ours, e.g., explaining the learning process using saliency maps from a computer-vision perspective, especially when using deep reinforcement learning as in [31]. In this paper, we focus on explaining goal-oriented decisions, to give the user an understanding of what motivates the robot's specific actions from different states, taking the problem domain into account.…”
Section: Outcome-Focused Explanations
confidence: 99%