Acknowledgments

This study was supported by grants from the Japan Science and Technology Corporation under the ERATO and CREST schemes. We thank Richard Henson, Okihide Hikosaka, Hiroshi Imamizu, Mitsuo Kawato, and Hiroyuki Nakahara for helpful comments, and Alex Harner for the experimental software. We thank the anonymous reviewers for their suggestions for improving the manuscript. We thank Joe Gati for help in running the experiments, and V. S. Chandrasekhar Pammi and Ahmed, UH for help with the analysis.
ABSTRACT

A visuomotor sequence can be learned as a series of visuo-spatial cues or as a sequence of effector movements. Earlier imaging studies have revealed that a network of brain areas is activated in the course of motor sequence learning. However, these studies do not address the type of representation being established at various stages of visuomotor sequence learning. In an earlier behavioral study, we demonstrated that acquisition of a visuo-spatial sequence representation enables rapid learning in the early stage, while the progressive establishment of a somato-motor representation supports speedier execution by the late stage. We conducted functional magnetic resonance imaging (fMRI) experiments in which subjects learned and practiced the same sequence alternately in normal and rotated settings. In one rotated setting (visual), subjects learned a new motor sequence in response to a sequence of visual cues identical to that in the normal setting. In the other rotated setting (motor), the display sequence was altered relative to the normal setting, but the same sequence of effector movements was used to perform the sequence. Comparison of the rotated settings revealed analogous transitions in both cortical and subcortical sites during visuomotor sequence learning: activity shifted from parietal to parietal-premotor and then to premotor cortex, with a concomitant shift from anterior putamen, to combined activity in both anterior and posterior putamen, and finally to posterior putamen. These results suggest a putative role for the engagement of different cortical and subcortical networks at various stages of learning in supporting distinct sequence representations.
Monetary rewards are uniquely human. Because money is easy to quantify and present visually, it is the reward of choice for most fMRI studies, even though it cannot be handed over to participants inside the scanner. A typical fMRI study requires hundreds of trials and thus small amounts of monetary reward per trial (e.g., 5p) if all trials are to be treated equally. However, small payoffs can have detrimental effects on performance owing to their limited buying power. Hypothetical monetary rewards can overcome the limitations of small monetary rewards, but it is less well known whether predictors of hypothetical rewards activate reward regions. In two experiments, visual stimuli were associated with hypothetical monetary rewards. In Experiment 1, we used stimuli predicting either visually presented or imagined hypothetical monetary rewards, together with non-rewarding control pictures. Activations to reward-predictive stimuli occurred in reward regions, namely the medial orbitofrontal cortex and midbrain. In Experiment 2, we parametrically varied the amount of visually presented hypothetical monetary reward while keeping the amount of actually received reward constant. Graded midbrain activation was observed to stimuli predicting increasing hypothetical rewards. The results demonstrate the efficacy of using hypothetical monetary rewards in fMRI studies.
We have developed a convolutional neural network (CNN) for recognizing facial expressions in human beings. We fine-tuned an existing CNN model, pre-trained on the ILSVRC2012 visual recognition dataset, on two widely used facial expression datasets, CFEE and RaFD, which, when trained and tested independently, yielded test accuracies of 74.79% and 95.71%, respectively. Generalization of results was evident when training on one dataset and testing on the other. Further, the image product of the cropped faces and their visual saliency maps, computed using the Deep Multi-Layer Network for saliency prediction, was fed to the facial expression recognition CNN. In the most generalized experiment, we observed a top-1 accuracy on the test set of 65.39%. General confusion trends between different facial expressions, as exhibited by humans, were also observed.
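The saliency-weighting step described above (the image product of a cropped face and its visual saliency map) can be sketched as an elementwise multiplication prior to feeding the image to the recognition CNN. This is a minimal illustration in NumPy; the function name and the min-max normalization of the saliency map are assumptions for clarity, not the authors' actual preprocessing code.

```python
import numpy as np

def saliency_weighted_input(face, saliency):
    """Elementwise product of a cropped face image and its saliency map.

    face: (H, W, C) float array with values in [0, 1]
    saliency: (H, W) float array; rescaled to [0, 1] before weighting
    (the rescaling is an illustrative assumption).
    """
    s = saliency.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # normalize to [0, 1]
    return face * s[..., None]  # broadcast the map across color channels

# Toy example: a 4x4 RGB "face" and a saliency map peaked at the center.
face = np.ones((4, 4, 3))
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0
weighted = saliency_weighted_input(face, sal)
# Salient pixels keep their intensity; non-salient pixels are suppressed.
```

The weighted image has the same shape as the input face, so it can replace the raw crop in the CNN's input pipeline without architectural changes.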