We describe work in progress with the aim of constructing a computational model of emotional learning and processing inspired by neurophysiological findings. The main brain areas modeled are the amygdala and the orbitofrontal cortex and the interaction between them. We want to show that (1) there exists enough physiological data to suggest the overall architecture of a computational model, and (2) emotion plays a clear role in learning behavior. We review neurophysiological data and present a computational model that is subsequently tested in simulation.

In Mowrer's influential two-process theory of learning, the acquisition of a learned response was considered to proceed in two steps (Mowrer, 1960/1973). In the first step, the stimulus is associated with its emotional consequences. In the second step, this emotional evaluation shapes an association between the stimulus and the response. Mowrer made an important contribution to learning theory when he acknowledged that emotion plays an important role in learning. Another important aspect of the theory is that it suggests a role for emotions that can easily be implemented as a computational model. Different versions of the two-process theory have been implemented as computational models, for example,
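To illustrate why the two-process theory lends itself so naturally to computation, here is a minimal sketch of Mowrer's two steps written as associative update rules. The class, the delta-rule form, and the learning rates are our illustrative assumptions, not the model described in the article:

```python
import numpy as np

class TwoProcessLearner:
    """Illustrative two-process learner (after Mowrer, 1960/1973).

    Step 1: associate a stimulus with its emotional consequence
            (a scalar emotional value, learned by a delta rule).
    Step 2: use that emotional evaluation to reinforce a
            stimulus -> response association.
    """

    def __init__(self, n_stimuli, n_responses, alpha=0.2, beta=0.1):
        self.alpha = alpha                               # step-1 learning rate
        self.beta = beta                                 # step-2 learning rate
        self.value = np.zeros(n_stimuli)                 # stimulus -> emotional value
        self.assoc = np.zeros((n_stimuli, n_responses))  # stimulus -> response strength

    def step1(self, stimulus, outcome):
        """Update the emotional evaluation of a stimulus (delta rule)."""
        error = outcome - self.value[stimulus]
        self.value[stimulus] += self.alpha * error

    def step2(self, stimulus, response):
        """Strengthen the taken response in proportion to the
        learned emotional value of the stimulus."""
        self.assoc[stimulus, response] += self.beta * self.value[stimulus]

    def act(self, stimulus):
        """Pick the response with the strongest association."""
        return int(np.argmax(self.assoc[stimulus]))

# Example: stimulus 0 is repeatedly paired with reward, and the
# response taken in its presence is reinforced by that evaluation.
learner = TwoProcessLearner(n_stimuli=4, n_responses=2)
for _ in range(10):
    learner.step1(stimulus=0, outcome=1.0)
    learner.step2(stimulus=0, response=1)
print(learner.act(0))  # -> 1
```

The key structural point the sketch captures is the decoupling: the emotional value is learned first and independently, and only then gates how strongly the stimulus-response link is updated.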
Interaction between humans involves a plethora of sensory information, both as explicit communication and as more subtle, unconsciously perceived signals. To enable natural human-robot interaction, robots will have to acquire the skills to detect and meaningfully integrate information from multiple modalities. In this article, we focus on sound localization in the context of a multi-sensory humanoid robot that combines audio and video information to yield natural and intuitive responses to human behavior, such as directed eye-head movements towards natural stimuli. We highlight four common sound source localization algorithms and compare their performance and advantages for real-time interaction. We also briefly introduce an integrated distributed control framework called DVC, into which additional modalities such as speech recognition, visual tracking, or object recognition can easily be integrated. We further describe how the sound localization module has been integrated into our humanoid robot, CB.
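The four localization algorithms compared in the article are not named in this excerpt. As a sketch of one widely used family, the following estimates the time difference of arrival between a left/right microphone pair with GCC-PHAT and converts it to an azimuth under a far-field assumption. All function names and parameter values here are illustrative and are not taken from the DVC framework or the CB implementation:

```python
import numpy as np

def gcc_phat(sig_l, sig_r, fs, max_tau=None):
    """Estimate the time delay (seconds) between two microphone signals
    using generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig_l) + len(sig_r)
    spec_l = np.fft.rfft(sig_l, n=n)
    spec_r = np.fft.rfft(sig_r, n=n)
    cross = spec_l * np.conj(spec_r)
    cross /= np.abs(cross) + 1e-12           # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:                   # optionally bound the search
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def azimuth_from_tdoa(tau, mic_distance, c=343.0):
    """Convert a time delay to an azimuth (degrees), far-field assumption."""
    x = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(x))

# Example with a synthetic 2-sample delay at 16 kHz, mics 0.2 m apart:
fs = 16000
sig = np.random.randn(1024)
tau = gcc_phat(np.roll(sig, 2), sig, fs)
print(azimuth_from_tdoa(tau, mic_distance=0.2))
```

The PHAT weighting whitens the cross-spectrum, which sharpens the correlation peak in reverberant rooms at the cost of amplifying noise in quiet frequency bands; that trade-off is one reason real-time systems compare several such algorithms before committing to one.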