2019
DOI: 10.3389/fnbot.2019.00045
An Embodied Agent Learning Affordances With Intrinsic Motivations and Solving Extrinsic Tasks With Attention and One-Step Planning

Abstract: We propose an architecture for the open-ended learning and control of embodied agents. The architecture learns action affordances and forward models based on intrinsic motivations and can later use the acquired knowledge to solve extrinsic tasks by decomposing them into sub-tasks, each solved with one-step planning. An affordance is here operationalized as the agent's estimate of the probability of success of an action performed on a given object. The focus of the work is on the overall architecture while sing…
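The abstract's operationalization of an affordance, the agent's estimate of the probability that an action on a given object succeeds, combined with one-step planning, can be illustrated with a minimal sketch. Everything here is hypothetical (the class and function names, the Laplace-smoothed success-rate table, and the planner's scoring rule are illustrative stand-ins, not the paper's actual architecture):

```python
from collections import defaultdict

class AffordanceEstimator:
    """Estimates P(success | action, object) from interaction outcomes.

    Hypothetical sketch: a Laplace-smoothed success-rate table stands in
    for the learned affordance predictor described in the abstract.
    """
    def __init__(self):
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def update(self, action, obj, succeeded):
        # Record one interaction outcome for this (action, object) pair.
        self.trials[(action, obj)] += 1
        if succeeded:
            self.successes[(action, obj)] += 1

    def p_success(self, action, obj):
        # Laplace smoothing: untried pairs get 0.5 instead of 0/0.
        return (self.successes[(action, obj)] + 1) / (self.trials[(action, obj)] + 2)

def one_step_plan(estimator, forward_model, state, goal, actions, objects):
    """One-step planning sketch: score each (action, object) pair by its
    affordance estimate, keeping only pairs whose predicted next state
    (from the forward model) matches the goal."""
    best, best_score = None, -1.0
    for a in actions:
        for o in objects:
            reaches_goal = 1.0 if forward_model(state, a, o) == goal else 0.0
            score = estimator.p_success(a, o) * reaches_goal
            if score > best_score:
                best, best_score = (a, o), score
    return best
```

In this reading, intrinsically motivated exploration fills the estimator's counts, and each extrinsic sub-task is then solved by a single lookahead step through the forward model rather than a full search.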

Cited by 13 publications (14 citation statements)
References 61 publications
“…The remaining components, computationally simplified, are relevant to support the interaction of the agent with the simulated card environment (see 62 for the simple simulator used for this purpose). The visual sensor is formed by an RGBY matrix that encodes a small portion of the environment, approximately covering one card at a time, and represents the input following an attention-focused saccade onto one specific card.…”
Section: Methods
confidence: 99%
“…The concept of Intrinsic Motivations (IMs) is borrowed from the biological [9] and psychological literature [10] describing how novel or unexpected "neutral" stimuli, as well as the perception of control over the environment, can generate learning processes even in the absence of assigned rewards or tasks. In the computational literature, IMs have been implemented in artificial agents to foster their autonomy in gathering knowledge [11], [12], learning repertoires of skills [13], [14], [15], [16], exploiting affordances from the environment [17], [18], [19], selecting their own tasks [20], [21], [22], and even boosting imitation learning techniques [23].…”
Section: Introduction
confidence: 99%
“…One advantage, employed here, is that the world model can directly select actions to perform; previous models [38,39] instead need an additional mechanism that selects actions on the basis of the state sequence produced by the world model. A second advantage is that for each environment state the world model can suggest the selection of actions that have a potential relevance in that context, rather than any action (this captures the popular idea of affordance in cognitive science [65,66]). A last advantage could be the easier learning (and understanding) of state-action sequences directed to a goal produced by other agents; indeed, the world model would be neutral with respect to whether actions are performed by another part of the brain or by another agent.…”
Section: Discussion of the General Features of the Model
confidence: 99%