Dynamic interactions with caregivers are essential for infants to develop cognitive abilities, including aspects of action, perception, and attention. We hypothesized that these abilities can be acquired through the predictive learning of sensory inputs together with their uncertainty (inverse precision), represented as variance. To examine this hypothesis from the perspective of cognitive developmental robotics, we conducted a neurorobotics experiment involving a ball-playing interaction task between a human experimenter, representing a caregiver, and a small humanoid robot, representing an infant. The robot was equipped with a dynamic generative model called a stochastic continuous-time recurrent neural network (S-CTRNN). The S-CTRNN learned to generate predictions of both the robot's visuoproprioceptive states and the uncertainty of these states by minimizing a negative log-likelihood consisting of a log-uncertainty term and a precision-weighted prediction error. The experimental results showed that predictive learning with uncertainty estimation enabled the robot to acquire infant-like cognitive abilities through dynamic interactions with the experimenter. We also discuss the effects of infant-directed modifications observed in caregiver-infant interactions on the development of these abilities.
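The objective described above is the Gaussian negative log-likelihood, in which the network outputs a predicted mean and a predicted variance for each sensory dimension. The following is a minimal PyTorch sketch of such a loss, not the authors' implementation; the function name `gaussian_nll` and the log-variance parameterization are illustrative assumptions.

```python
import math
import torch

def gaussian_nll(pred_mean, pred_log_var, target):
    """Illustrative Gaussian negative log-likelihood (not the paper's code).

    The network predicts both the mean (the expected visuoproprioceptive
    state) and the log-variance (its estimated uncertainty) of each input.
    """
    precision = torch.exp(-pred_log_var)      # inverse of predicted variance
    sq_error = (target - pred_mean) ** 2
    # 0.5 * [log(2*pi) + log(variance) + precision * error^2], summed over
    # dimensions and time steps: a log-uncertainty term plus a
    # precision-weighted prediction error, as described in the abstract.
    return 0.5 * (math.log(2.0 * math.pi)
                  + pred_log_var
                  + precision * sq_error).sum()
```

Minimizing this quantity trades the two terms off against each other: predicting a large variance discounts the squared error but is penalized by the log-uncertainty term, so the network is driven to report low uncertainty only where its predictions are reliable.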
We propose an imitative learning model that allows a robot to acquire the positional relations between a demonstrator and itself, and to transform observed actions into its own actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robot learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks: a convolutional autoencoder (CAE) and a multiple timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera, and the MTRNN is trained to integrate sensory-motor information and to predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations were formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
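The hierarchical structure mentioned here comes from the MTRNN's division of units into groups with different time constants. Below is a minimal sketch of one such cell under the standard leaky-integrator formulation commonly used for MTRNNs; the class name, sizes, and the specific time-constant values are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MTRNNCell(nn.Module):
    """Illustrative multiple-timescale RNN cell (not the paper's code).

    Units are split into a fast group (small time constant tau) that tracks
    sensory-motor detail and a slow group (large tau) whose state changes
    gradually, allowing more abstract representations of actions, objects,
    and positional relations to form.
    """
    def __init__(self, input_size, fast_size, slow_size,
                 tau_fast=2.0, tau_slow=30.0):
        super().__init__()
        hidden = fast_size + slow_size
        self.w_in = nn.Linear(input_size, hidden)
        self.w_rec = nn.Linear(hidden, hidden, bias=False)
        tau = torch.cat([torch.full((fast_size,), tau_fast),
                         torch.full((slow_size,), tau_slow)])
        self.register_buffer("tau", tau)

    def forward(self, x, u_prev):
        # Leaky-integrator update: each unit's internal state u moves toward
        # its new input at a rate set by its time constant, so slow units
        # integrate over longer horizons than fast units.
        u = (1.0 - 1.0 / self.tau) * u_prev + (1.0 / self.tau) * (
            self.w_in(x) + self.w_rec(torch.tanh(u_prev)))
        return u
```

In a model of this kind, the CAE's visual features are concatenated with proprioceptive inputs and fed to the cell step by step, and a readout from the hidden state predicts the next sensory-motor state.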