2020
DOI: 10.48550/arxiv.2005.06223
Preprint

DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics

Abstract: Robots are still limited to controlled conditions that the robot designer knows in enough detail to endow the robot with the appropriate models or behaviors. Learning algorithms add some flexibility: they can discover the appropriate behavior from demonstrations or from a reward that guides exploration in a reinforcement learning algorithm. Reinforcement learning algorithms rely on the definition of state and action spaces that delimit the reachable behaviors. Their adaptation capability cr…

Cited by 4 publications (5 citation statements)
References 96 publications (133 reference statements)
“…Indeed, as designers of the system, we chose a representation (a discretization of the output of a SLAM algorithm) adapted to the problem at hand (a navigation problem). However, the context of this proposal is to build on the representation redescription framework [Doncieux et al., 2018, 2020] to ultimately design systems that autonomously determine the representations adapted to the task. The modularity of the present architecture also makes it possible to extend it to the continuous case by replacing tabular value functions with neural network implementations.…”
Section: Discussion
confidence: 99%
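The remark quoted above, replacing a tabular value function with a neural-network approximator to handle a continuous state space, can be illustrated with a minimal, hedged sketch. This is not code from the cited work; the names (QNetwork, td_update), the network sizes, and the choice of PyTorch are assumptions made purely for illustration.

# Minimal sketch: a small Q-network standing in for a Q-table.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """MLP mapping a continuous state to one Q-value per discrete action."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def td_update(q_net, optimizer, s, a, r, s_next, gamma=0.99):
    """One temporal-difference step, replacing the tabular rule
    Q[s, a] += alpha * (r + gamma * max_a' Q[s', a'] - Q[s, a])."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with arbitrary continuous states:
q_net = QNetwork(state_dim=4, n_actions=3)
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
td_update(q_net, opt, torch.randn(4), a=1, r=0.5, s_next=torch.randn(4))

The tabular update indexed by a discrete state is replaced by gradient descent on a squared TD error, which is what lets the same learning scheme operate directly on continuous states.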
“…Equally important is to evaluate and explain other aspects of reinforcement learning, e.g. formally explaining the role of curriculum learning [82], quality diversity, or other human-learning-inspired aspects of open-ended learning [28,77,83]. Thus, more theoretical bases to support explainable-by-design DRL are required.…”
Section: Discussion
confidence: 99%
“…This makes it possible to capture the variations in the environment influenced by the agent's actions and, thus, to extrapolate explanations. SRL can be especially useful in RL for robotics and control [85,84,106,28,29], and can help in understanding how the agent interprets the observations and what is relevant to learn in order to act, i.e., actionable or controllable features [62]. Indeed, the dimensionality reduction induced by SRL, coupled with the link to control and the possible disentanglement of variation factors, could be highly beneficial to improving our understanding of the decisions made by RL algorithms that use a state representation method [63].…”
Section: Explanation Through Representation Learning
confidence: 99%
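The state representation learning (SRL) idea quoted above can be sketched in a few lines: an encoder compresses a high-dimensional observation into a compact state on which the policy acts, and that compact state is also the object one would inspect for explanation purposes. This is a hedged illustration, not the SRL method of any cited paper; the names (ObsEncoder, policy), dimensions, and reconstruction objective are assumptions.

# Minimal SRL sketch: autoencoder-style encoder feeding a downstream policy.
import torch
import torch.nn as nn

class ObsEncoder(nn.Module):
    def __init__(self, obs_dim=100, state_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, state_dim))
        self.decoder = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, obs_dim))

    def forward(self, obs):
        z = self.encoder(obs)           # learned low-dimensional state
        return z, self.decoder(z)       # reconstruction used as a training signal

policy = nn.Linear(4, 2)                # acts on the learned state, not the raw observation
srl = ObsEncoder()
obs = torch.randn(8, 100)               # batch of high-dimensional observations
state, recon = srl(obs)
recon_loss = nn.functional.mse_loss(recon, obs)   # SRL objective (here: reconstruction)
action_logits = policy(state.detach())            # downstream RL consumes the compact state

Because the policy only ever sees the low-dimensional state, analyzing that state (its dimensions, its disentangled factors) is a natural handle for explaining what the agent considers relevant for acting.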
“…Agents in ambitious open-ended learning settings, where neither the agent nor the designer knows the task or the domain ahead of time, could also experience active sources of non-stationarity. For example, these settings may involve a state and action space that changes over the course of an agent's lifetime while continuously generating creative behaviors (Doncieux et al., 2020). In active non-stationary environments, we can now assume that environment dynamics may vary in a Markovian way following the function p(z' | s, a, z), as in (Ong et al., 2010).…”
Section: Active Non-stationarity
confidence: 99%
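The active non-stationarity described above can be made concrete with a toy environment whose dynamics depend on a hidden mode z, where z itself evolves as a function of the agent's behavior, i.e. z' is sampled from p(z' | s, a, z). This is a hedged sketch under assumed toy dynamics; all names and probabilities are illustrative and not taken from the cited works.

# Toy environment with a hidden, action-dependent mode driving non-stationary dynamics.
import numpy as np

rng = np.random.default_rng(0)

def mode_transition(z, s, a):
    """Sample z' ~ p(z' | s, a, z): taking action 1 in mode 0 tends to flip the mode."""
    p_flip = 0.8 if (z == 0 and a == 1) else 0.1
    return 1 - z if rng.random() < p_flip else z

def step(s, a, z):
    """Environment dynamics depend on the current hidden mode z."""
    drift = 1.0 if z == 0 else -1.0
    s_next = s + drift * (a - 0.5) + 0.1 * rng.normal()
    z_next = mode_transition(z, s, a)
    reward = -abs(s_next)
    return s_next, reward, z_next

s, z = 0.0, 0
for t in range(5):
    a = int(rng.integers(0, 2))
    s, r, z = step(s, a, z)
    print(f"t={t} a={a} s={s:.2f} r={r:.2f} hidden mode={z}")

From the agent's point of view the transition dynamics drift over its lifetime, and that drift is partly caused by its own actions, which is what distinguishes this active case from passively non-stationary environments.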