2021
DOI: 10.3389/fnbot.2021.626380
Learning Actions From Natural Language Instructions Using an ON-World Embodied Cognitive Architecture

Abstract: Endowing robots with the ability to view the world the way humans do, to understand natural language and to learn novel semantic meanings when they are deployed in the physical world, is a compelling problem. Another significant aspect is linking language to action, in particular, utterances involving abstract words, in artificial agents. In this work, we propose a novel methodology, using a brain-inspired architecture, to model an appropriate mapping of language with the percept and internal motor representat…

Cited by 5 publications (2 citation statements)
References 64 publications
“…For instance, WM implementation could lead to autonomous systems with cognitive capabilities closer to the human ones, enabling the possibility of learning through interactions between humans, or learning from few examples integrating information from different sensory inputs in a similar way that humans do. Recent work has shown that the use of a WM component in robotic models can be useful to emulate many human-like cognitive functions, ranging from episodic memory, imagination and planning (Balkenius et al., 2018), language development (Giorgi et al., 2021b), and language grounding into actions and perceptions in embodied cognitive architectures (Giorgi et al., 2021a).…”
Section: Discussion
confidence: 99%
“…Computationally, then, several approaches to visual information processing focus on the manner in which an action is performed (how) [18]–[21], versus the result of the action, whether an object moves (where) or changes in appearance [22], [23].…”
Section: Introduction
confidence: 99%