2021
DOI: 10.3389/fpsyg.2021.716671

The Embodied Crossmodal Self Forms Language and Interaction: A Computational Cognitive Review

Abstract: Human language is inherently embodied and grounded in sensorimotor representations of the self and the world around it. This suggests that the body schema and ideomotor action-effect associations play an important role in language understanding, language generation, and verbal/physical interaction with others. There are computational models that focus purely on non-verbal interaction between humans and robots, and there are computational models for dialog systems that focus only on verbal interaction. However,…

Cited by 4 publications (2 citation statements)
References 77 publications
“…Empirical studies using EASE revealed a significant coincidence of self-disorder with typical formal thought aberrancies and imagination anomalies (34), which suggests the crucial role of the self in language processing. An embodied approach to language defines concepts as having the same structure as perceptions, such as the color, shape, movement, and emotional value (35). In the study, we have used spatial properties.…”
Section: Discussion (mentioning, confidence: 99%)
“…While RL is established for learning physical actions and suitable for general use (Silver et al., 2021), its use for language learning is yet emergent (Röder et al., 2021; Uc-Cetina et al., 2021). Regarding language as a sequence production problem, our language decoder could benefit from the availability of high-quality forward models, such as the Transformer language model.…”
Section: Discussion (mentioning, confidence: 99%)
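
To make the sequence-production framing in the statement above concrete, the following is a minimal sketch, not code from the reviewed paper or the citing work: greedy decoding driven by a forward model that scores candidate next tokens, with a toy lookup table standing in for a trained Transformer language model. The names TOY_FORWARD_MODEL, next_token_scores, and generate are illustrative assumptions.

from typing import List, Dict

# Toy "forward model": maps a context (tuple of tokens) to next-token scores.
# In practice this role would be played by a trained Transformer language model.
TOY_FORWARD_MODEL: Dict[tuple, Dict[str, float]] = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"robot": 0.7, "cup": 0.3},
    ("the", "robot"): {"moves": 0.8, "<eos>": 0.2},
    ("the", "robot", "moves"): {"<eos>": 1.0},
}

def next_token_scores(context: List[str]) -> Dict[str, float]:
    """Return scores over candidate next tokens for the given context."""
    return TOY_FORWARD_MODEL.get(tuple(context), {"<eos>": 1.0})

def generate(max_len: int = 10) -> List[str]:
    """Treat generation as sequence production: greedily extend the sequence
    with the highest-scoring token until the end-of-sequence marker."""
    sequence: List[str] = []
    for _ in range(max_len):
        scores = next_token_scores(sequence)
        token = max(scores, key=scores.get)
        if token == "<eos>":
            break
        sequence.append(token)
    return sequence

if __name__ == "__main__":
    print(generate())  # -> ['the', 'robot', 'moves']

A reinforcement-learning variant would replace the greedy choice with a learned policy over tokens and use the forward model's scores (or task success) as reward signal, which is the direction the quoted statement gestures toward.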