Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report results from a human–human cooperation experiment demonstrating that an agent’s view of his or her partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability for generating such gaze cues in a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of a robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long-term objectives of the work are thus to identify social cooperation cues and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: full gaze (coordinated eye and head movement), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues through statistical measures of action times in a cooperative task: gaze significantly facilitates cooperation, as measured by human response times.
A sensorimotor sequence may contain information structure at several different levels. In this study, we investigated the hypothesis that two dissociable processes are required for the learning of surface structure and abstract structure, respectively, of sensorimotor sequences. Surface structure is the simple serial order of the sequence elements, whereas abstract structure is defined by relationships between repeating sequence elements. Thus, sequences ABCBAC and DEFEDF have different surface structures but share a common abstract structure, 123213, and are therefore isomorphic. Our simulations of sequence learning performance in serial reaction time (SRT) tasks demonstrated that (1) an existing model of the primate fronto-striatal system is capable of learning surface structure but fails to learn abstract structure, which requires an additional capability, (2) surface and abstract structure can be learned independently by these independent processes, and (3) only abstract structure transfers to isomorphic sequences. We tested these predictions in human subjects. For a sequence with predictable surface and abstract structure, subjects in either explicit or implicit conditions learn the surface structure, but only explicit subjects learn and transfer the abstract structure. For sequences with only abstract structure, learning and transfer of this structure occurs only in the explicit group. These results are parallel to those from the simulations and support our dissociable process hypothesis. Based on the synthesis of the current simulation and empirical results with our previous neuropsychological findings, we propose a neurophysiological basis for these dissociable processes: Surface structure can be learned by processes that operate under implicit conditions and rely on the fronto-striatal system, whereas learning abstract structure requires a more explicit activation of dissociable processes that rely on a distributed network that includes the left anterior cortex.
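To make the surface/abstract distinction concrete, the short Python sketch below (not part of the original study; the helper names abstract_structure and isomorphic are illustrative) recodes a sequence by numbering each distinct element in order of first appearance. The result is the abstract structure, which makes the isomorphism of ABCBAC and DEFEDF from the example above explicit.

def abstract_structure(seq):
    # Number each distinct element by its order of first appearance,
    # e.g. "ABCBAC" -> (1, 2, 3, 2, 1, 3), i.e., the abstract structure "123213".
    labels = {}
    structure = []
    for item in seq:
        if item not in labels:
            labels[item] = len(labels) + 1
        structure.append(labels[item])
    return tuple(structure)

def isomorphic(seq_a, seq_b):
    # Two sequences are isomorphic when their abstract structures coincide,
    # regardless of their surface structure (the concrete element identities).
    return abstract_structure(seq_a) == abstract_structure(seq_b)

# The example from the text: different surface structures, same abstract structure.
assert abstract_structure("ABCBAC") == (1, 2, 3, 2, 1, 3)
assert isomorphic("ABCBAC", "DEFEDF")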
A number of behavioral and neuroimaging studies have reported converging data in favor of a cortical network for vestibular function, distributed between the temporo-parietal cortex and the prefrontal cortex in the primate. In this review, we focus on the role of the cerebral cortex in visuo-vestibular integration, including the motion-sensitive temporo-occipital areas, i.e., the medial superior temporal area (MST), and the parietal cortex. Indeed, these two neighboring cortical regions, though they both receive combined vestibular and visual information, have distinct implications for vestibular function. In sum, this review of the literature leads to the idea of two separate cortical vestibular sub-systems: (1) a velocity pathway, including MST and direct descending pathways to the vestibular nuclei; as it receives well-defined visual and vestibular velocity signals, this pathway is likely involved in heading perception and in rapid top-down regulation of eye/head coordination; and (2) an inertial processing pathway, involving the parietal cortex in connection with the subcortical vestibular nuclear complex responsible for velocity storage integration. This vestibular cortical pathway would be implicated in higher-order multimodal integration and cognitive functions, including world space and self-referential processing.