2018
DOI: 10.1007/s12559-018-9553-1

Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework

Abstract: From social dining in households to product assembly in manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as the interacting partner based on the evolving environmental context and thereby exploit such information to engage in goal-oriented action sequences. In the setting of an industrial task where two …

Cited by 7 publications (4 citation statements)
References 77 publications
“…In wayfinding research, visual cues in the built environment are a human's primary source of distal information [60]. Indeed, several researchers have developed vision-based techniques for enabling mobile robots and virtual agents to detect and react to information in their environment [61]. In order to realistically model the interaction between agents and their environment, a human-like visual perception model should focus on the first-person perception of signage while considering dynamic occlusions [62].…”
Section: Visual Perception Model
confidence: 99%
“…The most important features of a biological, cognitive system are the abilities to perceive, act, and learn. Many researchers have built computational models of intelligent cognitive systems [11][12][13] that can cooperate with other agents in a shared workspace towards a common goal [14], perform decision-making during path-planning and avoid obstacles with flying drones [15], and perceive information from an unfamiliar dynamic environment [12]. These abilities can also be characterized in terms of Shannon information theory [16,17].…”
Section: Introduction
confidence: 99%
“…To this end, a Human-Aware Task Planner has been used to define the sequence of actions to perform and to decide whether and when the robot should intervene. Bhat et al [7] present a neural architecture for goal-directed reasoning and cooperation between multiple robots in an industrial task, where two robots work together to assemble objects in a shared workspace.…”
Section: Cognitive Robotic Systems
confidence: 99%
“…The main reason behind a different behaviour is that now the robot, in contrast to the previous case, made the correct moves in the first steps (steps 1, 2). It can be noted that even though it gave little weight to the previous action outcome in that specific case, because the first level of engagement resulted in a successful strategy for the first two moves of the user, the robot adopted it as a preferred action even in the next steps (4, 7, 8). The main reason is straightforward if we look at Equation 3.…”
Section: Evaluation of α and γ
confidence: 99%