2022
DOI: 10.1609/aaai.v36i6.20663

Creativity of AI: Automatic Symbolic Option Discovery for Facilitating Deep Reinforcement Learning

Abstract: Despite achieving great success in real life, Deep Reinforcement Learning (DRL) still suffers from three critical issues: data inefficiency, lack of interpretability, and lack of transferability. Recent research shows that embedding symbolic knowledge into DRL is promising in addressing those challenges. Inspired by this, we introduce a novel deep reinforcement learning framework with symbolic options. This framework features a loop training procedure, which enables guiding the improvement of policy…

Cited by 15 publications (5 citation statements)
References 19 publications
“…static methods (e.g., knowledge distillation) might potentially result in higher efficiency, which we intend to investigate in the future. It would also be interesting to investigate the integration of symbolic planning model learning [41,40,39,11] into DPBERT to help improve the explainability of dynamic planning.…”
Section: Discussion
confidence: 99%
“…It is a modular approach, allowing for straightforward generalization and transfer to other complex tasks. Another study, Symbolic Options for Reinforcement Learning (SORL) [61], proposes a method for automatically discovering and learning symbolic options, which are higher-level actions with specified preconditions and postconditions, to assist deep reinforcement learning (DRL) agents in complex environments. It was successful in mitigating the problem of sparse and delayed rewards, along with improving efficiency. Neurosymbolic Logic Neural Network (LNN) for RL [62] provides fast convergence and interpretability for RL policies in text-based interaction games by extracting first-order logical facts from text observations and history using a semantic parser (ConceptNet), then training the symbolic rules with logical functions in the neural networks.…”
Section: A Learning For Reasoning RL Model
confidence: 99%
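The excerpt above describes symbolic options as higher-level actions with specified preconditions and postconditions. A minimal sketch of that idea is below; the class name, fields, and the `open_door` example are hypothetical illustrations, not the paper's actual data structures.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

# Hypothetical sketch of a symbolic option: it is applicable when its
# precondition symbols all hold in the current symbolic state, and it
# terminates once its postcondition symbols are all reached.

@dataclass(frozen=True)
class SymbolicOption:
    name: str
    precondition: FrozenSet[str]      # symbols required to start the option
    postcondition: FrozenSet[str]     # symbols that signal termination
    policy: Callable[[object], int]   # low-level policy: state -> action

    def applicable(self, symbols: FrozenSet[str]) -> bool:
        return self.precondition <= symbols

    def terminated(self, symbols: FrozenSet[str]) -> bool:
        return self.postcondition <= symbols


# Illustrative option: open a door once the agent holds a key and is at the door.
open_door = SymbolicOption(
    name="open_door",
    precondition=frozenset({"has_key", "at_door"}),
    postcondition=frozenset({"door_open"}),
    policy=lambda state: 0,  # placeholder low-level policy
)

print(open_door.applicable(frozenset({"has_key", "at_door"})))  # True
print(open_door.terminated(frozenset({"door_open", "has_key"})))  # True
```

Because pre- and postconditions are explicit sets of symbols, a planner can chain options by matching one option's postcondition against another's precondition, which is what makes the approach amenable to symbolic reasoning.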
“…It has also been used to learn programmatic policies that are more generalizable and flexible to different environments [71], [72], [73], [74]. Additionally, Neurosymbolic RL has been effective in reducing the symbolic space, resulting in more efficient representations of the policy, and improving the agent's performance [56], [57], [58], [59], [60], [61], [62].…”
Section: E Optimizing Parameters Of RL
confidence: 99%
“…The policy learning happens at two levels: each option policy is learned individually on the low level and the high-level controller learns which option to select in which state. Recently, there have been several works on defining symbolic options, allowing the RL agent to use reasoning instead of learning for finding (partially-ordered) plans over the set of options (Illanes, Yan, Icarte, & McIlraith, 2020;Lee, Katz, Agravante, Liu, Klinger, Campbell, Sohrabi, & Tesauro, 2021;Jin, Ma, Jin, Zhuo, Chen, & Yu, 2022). These approaches are very similar in spirit to policy sketches and future research could even define options based on sketch rules.…”
Section: Related Work
confidence: 99%
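The last excerpt describes two-level policy learning: a high-level controller selects which option to run, and each option's low-level policy executes until its termination condition holds. The toy loop below sketches that control flow under assumed interfaces; the `ToyKeyDoorEnv` environment, the tuple encoding of options, and the greedy controller are all hypothetical, not the cited papers' algorithms.

```python
# Minimal two-level execution sketch (assumed structure): symbols are
# strings, an option is a (name, precondition, postcondition, policy)
# tuple, and the controller greedily picks the first applicable option
# whose postcondition is not yet satisfied.

class ToyKeyDoorEnv:
    """Tiny illustrative environment: pick up a key, then open a door."""

    def __init__(self):
        self.has_key = False
        self.door_open = False

    def symbols(self):
        s = set()
        if self.has_key:
            s.add("has_key")
        if self.door_open:
            s.add("door_open")
        return s

    def step(self, action):
        if action == "grab":
            self.has_key = True
        elif action == "open" and self.has_key:
            self.door_open = True


options = [
    ("get_key", set(), {"has_key"}, lambda env: "grab"),
    ("open_door", {"has_key"}, {"door_open"}, lambda env: "open"),
]


def run(env, options, max_options=10):
    """High level: pick an applicable option. Low level: run its policy
    until the option's postcondition holds. Returns True on reaching
    the goal symbol "door_open"."""
    for _ in range(max_options):
        syms = env.symbols()
        if "door_open" in syms:
            return True
        applicable = [o for o in options
                      if o[1] <= syms and not o[2] <= syms]
        if not applicable:
            return False
        _, _, post, policy = applicable[0]
        while not post <= env.symbols():
            env.step(policy(env))
    return "door_open" in env.symbols()


print(run(ToyKeyDoorEnv(), options))  # True
```

The point of the sketch is the separation of concerns: the outer loop reasons over symbolic pre/postconditions (where planning can replace learning), while the inner loop is where a learned low-level policy would act.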