In this paper we present a model for action preparation and decision making in cooperative tasks that is inspired by recent experimental findings about the neurocognitive mechanisms supporting joint action in humans. It implements the coordination of actions and goals among the partners as a dynamic process that integrates contextual cues, shared task knowledge and the predicted outcome of others' motor behavior. The control architecture is formalized by a system of coupled dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode task-relevant information about action means, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic model of joint action is evaluated in a task in which a robot and a human jointly construct a toy object. We show that the highly context-sensitive mapping from action observation onto appropriate complementary actions enables the team to cope with dynamically changing joint-action situations.
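The field dynamics described here are typically written in a generic Amari-type form; the equation below is such a generic sketch, with labels, kernels and parameters that are illustrative rather than taken from the paper:

\[
\tau_i \,\frac{\partial u_i(x,t)}{\partial t} = -u_i(x,t) + h_i + S_i(x,t) + \int w_i(x-x')\, f\!\big(u_i(x',t)\big)\, dx' + \sum_{j \neq i} \int w_{ij}(x,x')\, f\!\big(u_j(x',t)\big)\, dx'
\]

Here \(u_i\) is the activation of field \(i\) over a task-relevant dimension \(x\), \(h_i<0\) a resting level, \(S_i\) external (contextual or sensory) input, \(w_i\) the recurrent interaction kernel (local excitation, surround inhibition) that sustains localized activation peaks, \(w_{ij}\) the coupling from a connected field \(j\), and \(f\) a sigmoidal output nonlinearity.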
How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand the actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human–robot collaboration that exploits this close perception-action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and nonverbal communication skills of the robot in a joint assembly task in which the human–robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution.
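As a concrete, purely illustrative instance of such a coupled system, the sketch below lets a peak in an "action observation" field drive a peak in a "goal" field; the field labels, sizes, kernels and gains are assumptions made for the sketch, not the connectivity used by the robot.

```python
# Two coupled Amari-type fields: the suprathreshold output of an "action
# observation" field is fed, through an assumed one-to-one mapping, into an
# "intention/goal" field. All sizes, kernels and gains are illustrative.
import numpy as np

n = 101
idx = np.arange(n, dtype=float)
d = np.subtract.outer(idx, idx)
w_lat = 2.5 * np.exp(-d**2 / (2 * 4.0**2)) - 1.0 * np.exp(-d**2 / (2 * 15.0**2))
w_ff = 4.0 * np.eye(n)                                   # assumed observation-to-goal mapping

f = lambda v: 1.0 / (1.0 + np.exp(-5.0 * v))             # steep sigmoidal output function

def step(u, s, h, dt=1.0, tau=20.0):
    """One Euler step of an Amari-type field with lateral kernel w_lat."""
    return u + dt / tau * (-u + h + s + w_lat @ f(u))

u_obs = np.full(n, -2.0)                                 # field driven by the observed user action
u_goal = np.full(n, -3.0)                                # field representing the inferred goal
s_obs = 4.0 * np.exp(-(idx - 60.0)**2 / (2 * 4.0**2))    # stimulus: an observed action at site 60

for _ in range(400):
    u_obs = step(u_obs, s_obs, h=-2.0)
    u_goal = step(u_goal, w_ff @ f(u_obs), h=-3.0)       # coupling: one field's output is the other's input

print("inferred goal site:", int(np.argmax(u_goal)))     # a peak forms at the observed site
```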
In this chapter we present results of our ongoing research on efficient and fluent human-robot collaboration that is heavily inspired by recent experimental findings about the neurocognitive mechanisms supporting joint action in humans. The robot control architecture implements the joint coordination of actions and goals as a dynamic process that integrates contextual cues, shared task knowledge and the predicted outcome of the user's motor behavior. The architecture is formalized as a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations with specific functionalities. We validate the approach in a task in which a robot and a human user jointly construct a toy 'vehicle'. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing joint-action situations.
We tested in a robotics experiment a dynamic neural field model for learning a precisely timed musical sequence. Based on neuro-plausible processing mechanisms, the model implements the idea that the order and relative timing of events are stored in an integrated representation, whereas the onset of sequence production is controlled by a separate process. Dynamic neural fields provide a rigorous theoretical framework to analyze and implement the neural computations that bridge the gap between sensation and action in order to mediate working memory, action planning, and decision making. The robot first memorizes a short musical sequence performed by a human teacher by watching color-coded keys on a screen, and then tries to execute the piece of music on a keyboard from memory, without any external cues. The experimental results show that the robot is able to correct initial sequencing and timing errors within very few demonstration-execution cycles.
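A deliberately simplified, non-neural toy sketch of this storage/readout idea follows (the paper's model uses dynamic neural fields, not this code): relative timing is stored as an activation gradient, and a separate "go" signal determines when readout by threshold crossing starts.

```python
# Toy illustration (not the paper's field equations): order and relative
# timing are stored together as an activation gradient, while a separate
# "go" signal controls when production starts.

# Assumed demonstration: (key index, onset time in seconds from sequence start).
demo = [(0, 0.0), (2, 0.5), (1, 1.2), (3, 2.0)]

# Integrated memory: earlier events get stronger activation, so a single
# gradient carries both serial order and relative timing.
t_span = max(t for _, t in demo) + 0.5
memory = {key: t_span - t for key, t in demo}

def reproduce(memory, go_time):
    """After the separate onset signal at go_time, each item's activation
    ramps up with unit slope; threshold crossings reproduce both the order
    and the relative timing of the demonstrated sequence."""
    threshold = max(memory.values())
    events = [(round(go_time + (threshold - strength), 3), key)
              for key, strength in memory.items()]
    return sorted(events)

print(reproduce(memory, go_time=5.0))
# -> [(5.0, 0), (5.5, 2), (6.2, 1), (7.0, 3)]: intervals preserved, onset set by the go signal
```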
We present a motion controller that generates collision-free trajectories for autonomous Tugger vehicles operating in dynamic factory environments that they may share with human operators. The controller is formalized as a dynamical system of path velocity and heading direction, whose vector fields change as sensory information varies. By design, the parameters are tuned so that the control variables remain close to an attractor of the resultant dynamics most of the time. This contributes to the overall asymptotic stability of the system and makes it robust against perturbations. We present several experiments, in a real factory environment, that highlight the innovative features of the navigation system: flexible and safe solutions for human-aware autonomous navigation in dynamic and cluttered environments. This means that, besides generating collision-free trajectories online between via-points, the system detects the presence of humans, interacts with them in a way that shows awareness of their presence, and generates adequate motor behavior. Index Terms: Tugger vehicles, flexible and safe autonomous navigation, obstacle avoidance, dynamic environments shared with human operators.
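To make the attractor-dynamics formulation concrete, a minimal sketch of heading-direction dynamics is given below: the target direction erects an attractor and each obstacle a repeller. Gains, ranges and the obstacle model are illustrative assumptions, not the deployed controller's tuning.

```python
# Minimal sketch of attractor dynamics for the heading direction phi:
# the target direction erects an attractor, each obstacle a repeller whose
# strength decays with distance. Gains and ranges are illustrative only.
import numpy as np

def heading_rate(phi, psi_target, obstacles, lam_tar=1.0, lam_obs=2.0, sigma=0.4):
    """obstacles: list of (psi_obs, distance); closer obstacles repel more strongly."""
    f = -lam_tar * np.sin(phi - psi_target)                 # attractor at the target direction
    for psi_obs, dist in obstacles:
        f += (lam_obs * np.exp(-dist) * (phi - psi_obs)
              * np.exp(-(phi - psi_obs) ** 2 / (2 * sigma ** 2)))  # repeller at the obstacle direction
    return f

# Integrate the heading dynamics; the control variable relaxes to the
# resulting attractor, a compromise deflected away from the obstacle.
phi, dt = 0.0, 0.02
psi_target = np.deg2rad(40.0)
obstacles = [(np.deg2rad(25.0), 1.0)]        # one obstacle roughly in the target direction, 1 m away
for _ in range(600):
    phi += dt * heading_rate(phi, psi_target, obstacles)
print("final heading (deg): %.1f" % np.degrees(phi))   # settles a bit past the target, away from the obstacle
```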