Interference is frequently observed during bimanual movements when the two hands perform nonsymmetric actions. We examined the source of bimanual interference in two experiments that compared symmetric movements with movements of different amplitudes or different directions. The target movements were cued either symbolically by letters or directly by the onset of the target locations. With symbolic cues, reaction times were longer when the movements of the two hands were not symmetric. With direct cues, reaction times were the same for symmetric and nonsymmetric movements. These results indicate that directly cued actions can be programmed in parallel for the two hands. Our results challenge the hypothesis that the cost of initiating nonsymmetric movements is due to spatial interference in a motor-programming stage. Rather, the cost appears to arise from stimulus identification, from response-selection processes tied to the processing of symbolic cues, or both.
Four experiments investigated distortions in memory for the location of a dot relative to two horizontally aligned landmarks. In Experiment 1, participants reproduced a dot location from memory with respect to the two landmarks. Their performance showed a systematic pattern of distortion that was consistent across individual participants. The three subsequent experiments investigated the time course of these spatial memory distortions. Using a visual discrimination task, we were able to map the emergence of spatial distortions within the first 800 msec of the retention interval. A distortion was already present after retention intervals as brief as 50 msec. In all but one experiment, the distortion increased with longer retention intervals. This early onset of spatial memory distortions may reflect the almost immediate decay of detailed spatial information and the early influence of an enduring spatial memory representation, which encodes spatial information in terms of the perceived structure of space.
Representational models specify how complex patterns of neural activity relate to visual stimuli, motor actions, or abstract thoughts. Here we review pattern component modeling (PCM), a practical Bayesian approach for evaluating such models. Similar to encoding models, PCM evaluates the ability of models to predict novel brain activity patterns. In contrast to encoding models, however, the activity of individual voxels across conditions (the activity profiles) is not directly fitted. Rather, PCM integrates over all possible activity profiles and computes the marginal likelihood of the data under the activity profile distribution specified by the representational model. By using an analytical expression for the marginal likelihood, PCM allows the fitting of flexible representational models, in which the relative strength and form of the encoded features can be estimated from the data. We discuss a number of different forms in which such flexible representational models can be specified, and how models of different complexity can be compared. We then provide a number of practical examples from our recent work in motor control, ranging from fixed models to more complex nonlinear models of brain representations. The code for the fitting and cross-validation of representational models is provided in an open-source MATLAB toolbox.
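The marginal-likelihood idea behind PCM can be illustrated with a minimal sketch. Assuming a fixed model with a known second-moment matrix G, zero-mean Gaussian activity profiles, and i.i.d. Gaussian noise, integrating over all possible activity profiles makes each voxel's profile a draw from a multivariate normal with covariance G plus noise. The function name, arguments, and the Python/NumPy setting below are illustrative choices for this sketch, not the actual toolbox interface (which is in MATLAB):

```python
import numpy as np
from scipy.stats import multivariate_normal

def pcm_log_marginal_likelihood(Y, G, sigma2):
    """Log marginal likelihood of data Y (K conditions x P voxels) under a
    fixed representational model with second-moment matrix G (K x K).
    Assumes zero-mean Gaussian activity profiles and i.i.d. Gaussian noise."""
    K, P = Y.shape
    # Marginal covariance of each voxel's activity profile: signal + noise.
    V = G + sigma2 * np.eye(K)
    # Integrating out the activity profiles analytically yields a
    # multivariate normal likelihood per voxel; voxels are independent,
    # so the log likelihoods sum across voxels.
    rv = multivariate_normal(mean=np.zeros(K), cov=V)
    return rv.logpdf(Y.T).sum()
```

In practice the noise variance (and any free model parameters) would be estimated by maximizing this quantity, and competing representational models would then be compared by their (cross-validated) marginal likelihoods.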