When the apparent visual location of a body part conflicts with its veridical location, vision can dominate proprioception and kinesthesia. In this article, we show that vision can capture tactile localization. Participants discriminated the location of vibrotactile stimuli (upper, at the index finger, vs. lower, at the thumb), while ignoring distractor lights that could independently be upper or lower. Such tactile discriminations were slowed when the distractor light was incongruent with the tactile target (e.g., an upper light during lower touch) rather than congruent, especially when the lights appeared near the stimulated hand. The hands were occluded under a table, with all distractor lights above the table. The effect of the distractor lights increased when rubber hands were placed on the table, "holding" the distractor lights, but only when the rubber hands were spatially aligned with the participant's own hands. In this aligned situation, participants were more likely to report the illusion of feeling touch at the rubber hands. Such visual capture of touch appears cognitively impenetrable: it arose even though participants were fully aware that the rubber hands were not their own.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when it was congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant's two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of cross-modal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.
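As a concrete illustration of how such effects are typically quantified (a sketch on our part; the abstract itself does not spell out the measure), the cross-modal congruency effect (CCE) in this paradigm is usually summarized as the performance difference between incongruent and congruent distractor trials:

\[
\mathrm{CCE} \;=\; \overline{RT}_{\text{incongruent}} \;-\; \overline{RT}_{\text{congruent}}
\]

with an analogous difference computable for error rates; some studies in this literature instead combine speed and accuracy into inverse-efficiency scores (mean reaction time divided by proportion correct) before taking the difference. Larger values indicate stronger interference from the to-be-ignored visual distractors.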
In a study that builds on recent cognitive neuroscience research on body perception and social psychology research on social relations, we tested the hypothesis that synchronous multisensory stimulation leads to self-other merging. We brushed the cheek of each study participant as he or she watched a stranger's cheek being brushed in the same way, either in synchrony or in asynchrony. This multisensory procedure affected participants' body perception as well as their social perception: participants exposed to synchronous stimulation showed more merging of self and other than participants exposed to asynchronous stimulation. Self-other merging was assessed by measuring participants' body sensations and their perception of face resemblance, as well as their judgments of the inner state of the other, closeness felt toward the other, and conformity behavior. The results of this study show how multisensory integration can affect social perception and create a sense of self-other similarity.
The authors report a series of six experiments investigating cross-modal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on one side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.