Proceedings of the Genetic and Evolutionary Computation Conference Companion 2022
DOI: 10.1145/3520304.3528897
Interpretable pipelines with evolutionary optimized modules for reinforcement learning tasks with visual inputs

Abstract: The importance of explainability in AI has become a pressing concern, for which several explainable AI (XAI) approaches have been proposed recently. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not reflect exactly the state of the original models. Thus, a more direct way of achieving XAI is through interpretable (also called glass-box) models. These models have been shown to obtain comparable (and, in some cases, better) performan…

Cited by 11 publications (2 citation statements)
References 21 publications
“…Similar attempts at combining RL and EC have tried to obtain interpretable policies for RL tasks by combining decision trees induced by GP or Grammatical Evolution with RL acting on the leaves while the policy interacts with the environment [14][15][16][29]. Some other works in this area have explicitly focused on addressing the interpretability question in white-box models.…”
Section: EC for XAI
confidence: 99%
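To make the combination described in the excerpt above concrete, here is a minimal sketch, not the method of any specific cited work: the decision-tree structure is assumed to be given (standing in for one induced by GP or Grammatical Evolution), while tabular Q-learning updates the leaf values as the policy interacts with the environment. The hand-built tree, the CartPole task, and the gymnasium dependency are illustrative assumptions.

```python
# Sketch: fixed decision-tree policy with RL (Q-learning) acting on the leaves.
import random
import gymnasium as gym  # assumes gymnasium is installed

class Leaf:
    def __init__(self, n_actions):
        self.q = [0.0] * n_actions                  # per-action value estimates
    def act(self, eps=0.1):
        if random.random() < eps:                    # epsilon-greedy exploration
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)
    def update(self, a, target, lr=0.1):
        self.q[a] += lr * (target - self.q[a])       # tabular Q-learning step

class Node:
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
    def route(self, obs):
        child = self.left if obs[self.feature] < self.threshold else self.right
        return child.route(obs) if isinstance(child, Node) else child

# Hypothetical hand-built tree for CartPole (4 features, 2 actions); in the
# cited works this structure would come from the evolutionary search instead.
tree = Node(2, 0.0, Leaf(2), Leaf(2))                # split on pole angle

env = gym.make("CartPole-v1")
gamma = 0.99
for episode in range(10):
    obs, _ = env.reset()
    done = False
    while not done:
        leaf = tree.route(obs)
        a = leaf.act()
        next_obs, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        target = r + (0.0 if done else gamma * max(tree.route(next_obs).q))
        leaf.update(a, target)                       # update while interacting
        obs = next_obs
```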
“…In [5], the authors use genetic programming [9] and CMA-ES [7] to evolve interpretable DTs that are able to work in RL settings with images as input. However, the experimental results show that the proposed approach exhibits good performance only in scenarios that are not affected by noise.…”
Section: Interpretable AI
confidence: 99%
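As an illustration of the CMA-ES half of the combination mentioned in this excerpt, the sketch below uses CMA-ES (via the cma package) to tune the continuous split thresholds of a fixed, hypothetical decision-tree policy, with fitness given by the negated mean episode return. In the cited approach, GP additionally searches over the tree structure itself, which is not shown here; the depth-2 tree, the CartPole task, and both dependencies are assumptions for the example.

```python
# Sketch: CMA-ES optimizing the split thresholds of a fixed decision-tree policy.
import numpy as np
import cma                                   # assumes the `cma` package is installed
import gymnasium as gym                      # assumes gymnasium is installed

def tree_policy(obs, theta):
    # Hypothetical depth-2 tree over CartPole features; theta holds thresholds.
    if obs[2] < theta[0]:                    # pole angle
        return 0 if obs[3] < theta[1] else 1 # angular velocity
    return 1 if obs[3] > theta[2] else 0

def fitness(theta, episodes=3):
    env = gym.make("CartPole-v1")
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, r, term, trunc, _ = env.step(tree_policy(obs, theta))
            done = term or trunc
            total += r
    return -total / episodes                 # CMA-ES minimizes, so negate return

es = cma.CMAEvolutionStrategy(3 * [0.0], 0.5)
for _ in range(20):                          # a few generations as a demo
    solutions = es.ask()
    es.tell(solutions, [fitness(np.asarray(s)) for s in solutions])
es.result_pretty()
```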