Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from the otolith organs and rotation signals from the semicircular canals; in the vestibular nerve, these signals are carried by separate afferents. Vestibular nucleus neurons therefore represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons of the vestibular nuclei, that is, multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude to combined translation and rotation was generally less than the sum of the magnitudes of the responses to each stimulus applied independently. However, under conditions in which the peaks of the rotational and translational responses were coincident, these signals were approximately additive. When rotation and translation were presented at different frequencies, rotation was attenuated more than translation, regardless of which stimulus was at the higher frequency. These data suggest a nonlinear interaction between the two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audiovisual integration in the superior colliculus.

NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation of each. The results demonstrate nonlinear integration that yields maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
Variable impedance control in operational space is a promising approach to learning contact-rich manipulation behaviors. One of the main challenges with this approach is producing manipulation behavior that ensures the safety of both the arm and the environment. Such behavior is typically enforced via a reward function that penalizes unsafe actions (e.g., obstacle collisions or exceeding joint limits), but this approach is not always effective and does not yield behaviors that can be reused in slightly different environments. We show how to combine Riemannian Motion Policies, a class of policies that dynamically generate motion in the presence of safety and collision constraints, with variable impedance operational-space control to learn safer contact-rich manipulation behaviors.
Learning a robot motor skill from scratch is impractically slow: so much so that, in practice, learning must be bootstrapped with a good initial policy obtained from human demonstration. However, reliance on human demonstration necessarily degrades the autonomy of robots that must learn a wide variety of skills over their operational lifetimes. We propose kinematic motion planning as a fully autonomous, sample-efficient way to bootstrap motor skill learning for object manipulation. We demonstrate the use of motion planners to bootstrap motor skills in two complex object manipulation scenarios with different policy representations: opening a drawer with a dynamic movement primitive representation, and closing a microwave door with a deep neural network policy. We also show how our method can bootstrap a motor skill for the challenging dynamic task of hitting a ball off a tee, where a kinematic plan that treats the scene as static is insufficient to solve the task but sufficient to bootstrap a more dynamic policy. In all three cases, our method is competitive with human-demonstrated initialization and significantly outperforms starting from a random policy. This approach enables robots to learn motor policies for dynamic tasks efficiently and autonomously, without human demonstration.