2016
DOI: 10.1177/1059712316665451

Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action

Abstract: We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Differe…
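As a rough illustration of the kind of building block the abstract refers to, the following is a minimal sketch of a single Amari-type dynamic neural field with a lateral-interaction kernel, integrated with Euler steps. All parameter values, function names, and the kernel shape are illustrative assumptions, not values taken from the paper; the actual architecture couples several such fields.

```python
import numpy as np

# Illustrative lateral-interaction kernel: local excitation, broader inhibition.
# Parameter values are assumptions for this sketch, not the paper's settings.
def interaction_kernel(dx, a_exc=2.0, s_exc=2.0, a_inh=1.0, s_inh=4.0):
    return (a_exc * np.exp(-dx**2 / (2 * s_exc**2))
            - a_inh * np.exp(-dx**2 / (2 * s_inh**2)))

def simulate_field(stimulus, n=100, dt=0.1, tau=10.0, h=-2.0, steps=500):
    """Euler integration of tau * du/dt = -u + w * f(u) + S + h (Amari field)."""
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])
    w = interaction_kernel(np.minimum(dist, n - dist))  # circular field topology
    u = np.full(n, float(h))                            # start at resting level h
    for _ in range(steps):
        f_u = 1.0 / (1.0 + np.exp(-4.0 * u))            # sigmoidal output rate
        u = u + (dt / tau) * (-u + w @ f_u + stimulus + h)
    return u

# A localized input bump drives suprathreshold activity at the stimulated location.
S = 5.0 * np.exp(-(np.arange(100) - 50.0)**2 / (2 * 3.0**2))
u_final = simulate_field(S)
print("peak activity %.2f at position %d" % (u_final.max(), u_final.argmax()))
```

In the paper's architecture, several such fields (for intention inference, emotional state, decision making, and action) are coupled, with the output of one field entering others as input; the single-field sketch above only shows the basic field dynamics.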

Cited by 5 publications (5 citation statements) | References 75 publications
“…Interestingly, these neuroscientific findings are in line with behavioral studies showing that humans seem to be willing to treat robots as entities with agency (i.e., ability to plan and act), but are reluctant to perceive them as entities that can experience internal states (i.e., ability to sense and feel; Gray et al, 2007 ). In consequence, research in social robotics would benefit from identifying conditions under which artificial agents engage mechanisms of higher-order social cognition in the human brain, which may necessitate some effort to specifically design robots as intentional and empathetic agents ( Gonsior et al, 2012 ; Silva et al, 2016 ).…”
Section: Observing Intentional Agents Activates Social Brain Areas
confidence: 99%
“…Nevertheless, where tactile interaction might be particularly relevant in human-robot interaction is in the domain of Joint Action (cf. [47][48][49]) where interaction on a goal-directed task may benefit from human actors conveying both negative (rejection-based) and positive (e.g., gratitude) feedback. Again, in reference to [45], communicating emotions such as anger/frustration may be critically important to informing interactors as to how the task is perceived to be going and how to respond accordingly (e.g., approach the task in a different way, or try harder).…”
confidence: 99%
“…It implements a highly context-sensitive mapping of an observed action of the co-worker onto an adequate complementary robot behavior. This mapping takes into account different task-related and user-related factors including an error monitoring capacity [6,8,54]. Neural populations encode in their suprathreshold activity a mismatch between which assembly step the operator should execute (shared task knowledge) and the predicted assembly step inferred from the observed motor behavior.…”
Section: Discussion
confidence: 99%
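The excerpt above describes mismatch encoding: neural populations become suprathreshold when the assembly step predicted from the observed motor behavior diverges from the step the shared task knowledge expects. The following is a hypothetical, non-neural sketch of that comparison logic only; the function name, activation dictionary, and threshold are assumptions for illustration and do not reproduce the architecture's field-based implementation.

```python
# Hypothetical error-monitoring check (names and threshold are illustrative):
# compare the assembly step the shared task model expects next with the step
# inferred from the observed action, and flag a mismatch only when the
# inference is confident (i.e., some step's activation is suprathreshold).
def detect_mismatch(expected_step, step_activations, threshold=0.5):
    inferred_step = max(step_activations, key=step_activations.get)
    if step_activations[inferred_step] < threshold:
        return False   # no population above threshold: nothing to compare yet
    return inferred_step != expected_step

# Shared task knowledge expects step 3 next, but the observed motor behavior
# is best explained by step 5, so a mismatch (error signal) is raised.
print(detect_mismatch(3, {3: 0.2, 5: 0.9}))   # True
```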