Considerable evidence now shows that referencing the self in a task modulates attention, perception, memory, and decision-making. Furthermore, the self-reference effect (SRE) cannot be reduced to domain-general factors (e.g., reward value) and is supported by distinct neural circuitry. However, it remains unknown whether self-associations also modulate response execution. The present study tested this. Participants carried out a perceptual-matching task, and movement time (MT) was measured separately from reaction time (RT), drawing on methodology from the intelligence literature. A response box recorded 'home'-button releases (yielding RT, measured from stimulus onset), and a target key positioned 14 cm from the response box recorded MT (from 'home'-button release to target-key depression). MTs of responses to self-associated stimuli were faster than those to other-person-associated stimuli, with a higher proportion correct for self-related responses. We present a novel demonstration that the SRE can modulate the execution of rapid-aiming arm-movement responses. Implications of the findings are discussed, along with suggestions to guide future work investigating how the SRE influences action.
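To make the RT/MT decomposition concrete, here is a minimal Python sketch (not the authors' analysis code), assuming each trial yields three timestamps: stimulus onset, 'home'-button release, and target-key depression.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    stimulus_onset: float  # seconds from trial start
    home_release: float    # 'home'-button release time
    target_press: float    # target-key depression time

def decompose(trial: Trial) -> tuple[float, float]:
    """Return (RT, MT) in seconds.

    RT: stimulus onset -> 'home'-button release (response initiation).
    MT: 'home'-button release -> target-key depression (execution of the
    14 cm aiming movement).
    """
    rt = trial.home_release - trial.stimulus_onset
    mt = trial.target_press - trial.home_release
    return rt, mt

# Hypothetical trial: a 350 ms initiation followed by a 180 ms movement.
rt, mt = decompose(Trial(stimulus_onset=0.0, home_release=0.350, target_press=0.530))
print(f"RT = {rt * 1000:.0f} ms, MT = {mt * 1000:.0f} ms")
```

Separating the two intervals in this way is what allows effects on response execution (MT) to be assessed independently of effects on response initiation (RT).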
A wealth of recent research supports the validity of the Self-Prioritization Effect (SPE): the performance advantage for responses to self-associated over other-person-associated stimuli in a shape–label matching task. However, findings are inconsistent regarding which stage(s) of information processing are influenced. On one account, self-prioritization modulates multiple stages of processing; on a competing account, it is driven solely by a modulation of central-stage information processing. To decide between these possibilities, the present study tested whether the previously reported self-advantage in arm movements could reflect a response bias based on visual feedback (Experiment 1) or approach-motivation processes (Experiments 1 and 2). In Experiment 1, visual feedback was occluded in a ballistic movement-time variant of the matching task; in Experiment 2, task responses were directed away from the stimuli and the participant's body. The self-advantage in arm-movement responses emerged in both experiments. The findings indicate that this advantage depends neither on visual feedback nor on a response directed toward the self or the stimuli, and that self-relevance can modulate movement responses relying predominantly on proprioceptive, kinaesthetic, and tactile information. These findings support the view that self-relevance modulates arm-movement responses, countering the suggestion that self-prioritization influences only central-stage processes and instead supporting a multiple-stage influence.
Ninety-five percent of the world's population associate a rounded visual shape with the spoken word 'bouba' and an angular visual shape with the spoken word 'kiki', a phenomenon known as the bouba/kiki-effect. The effect occurs irrespective of familiarity with either the shape or the word. This study investigated the bouba/kiki-effect when using haptic touch instead of vision, including the role of visual imagery. It also investigated whether the bouba/kiki shape-audio regularities are noticed at all, that is, whether they affect the bouba/kiki-effect itself and/or the recognition of individual bouba/kiki shapes, and, finally, what mental images they produce. Three experiments were conducted with three groups of participants: blind, blindfold, and vision. In Experiment 1, participants were asked to pick out the tactile/visual shape that they associated with the auditory bouba/kiki. Participants who were blind did not show an immediate bouba/kiki-effect (in Trial 1), whereas the blindfolded and the fully sighted did. The shape-audio regularities also affected the bouba/kiki-effect under haptic touch: those who were blind showed the effect from Trial 4 onwards, whereas those who were blindfolded no longer did. In Experiment 2, participants were asked to name one tactile/visual shape and a segment of audio together as either 'bouba' or 'kiki'. Corresponding shape and audio improved the accuracy of both the blindfolded and the fully sighted, but not of those who were blind; they ignored the audio. Finally, in Experiment 3, participants were asked to draw the shape that they associated with the auditory bouba/kiki. Their mental images, as depicted in their drawings, were not affected by whether they had experienced the bouba/kiki shapes through haptic touch or through vision. Regardless of prior shape experience, tactile or visual, their mental images included the most characteristic shape feature of bouba and kiki (curve and angle, respectively), and typically not the global shape. Taken together, these experiments suggest that the sensory regularities and mental images concerning bouba and kiki need not be based on, or even include, visual information.
This article presents a protocol for investigating the role of visual imagery in the bouba/kiki-effect, whether training in noticing the bouba/kiki shape-audio regularities affects the bouba/kiki-effect and the recognition of individual bouba and kiki shapes, and, finally, what mental images these regularities produce. To generate bouba/kiki shape-audio regularities, there were two types of shapes (filled; outlined) and two types of audio (word; non-word sound). Three groups of individuals participated in three experiments: blind, blindfold, and vision. The experiments were conducted in a fixed order across participants, with no break between them. In Experiment 1 (pre-test-post-test design with three repeated within-group measures), participants were asked to pick out the shape they associated with the auditory bouba/kiki; in Experiment 2 (within-subject design), to name one shape and some audio (sometimes congruous; sometimes incongruous) as 'bouba' or 'kiki'; and in Experiment 3 (post-test-only design), to draw the shape they associated with the auditory bouba/kiki. The results suggest that the blindfold group draws upon visual imagery to solve new problems, but not in the long term; that training in noticing bouba/kiki shape-audio regularities affects the bouba/kiki-effect and the recognition of individual bouba and kiki shapes, but differently in each experimental group; and that all experimental groups create mental images of the most characteristic shape feature of bouba (curve) and kiki (angle). In short, the effect of visual imagery is robust across tasks but not over the long term, whereas the effect of learning shape-audio regularities is robust over the long term but not across tasks. The presented protocol is appropriate for investigating the effects of visual imagery and of learning shape-audio regularities, when they occur, and how robust they are, in specific individuals and groups of individuals. This protocol is unique in that it keeps both visual imagery and the sensory information under control during training and testing.
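As an illustration of the congruity manipulation in Experiment 2, the following Python sketch builds a hypothetical trial list crossing the two shape types and two audio types; the labels and trial counts are assumptions for illustration, not the published materials.

```python
import itertools
import random

random.seed(1)

# Two shape types and two audio types per pseudoword (assumed labels).
shapes = [("bouba", "filled"), ("bouba", "outlined"),
          ("kiki", "filled"), ("kiki", "outlined")]
sounds = [("bouba", "word"), ("bouba", "non-word sound"),
          ("kiki", "word"), ("kiki", "non-word sound")]

# Cross every shape with every audio segment; a trial is congruous when
# the shape and the audio share the same bouba/kiki identity.
trials = [{"shape": f"{s_id} ({s_type})",
           "audio": f"{a_id} ({a_type})",
           "congruent": s_id == a_id}
          for (s_id, s_type), (a_id, a_type) in itertools.product(shapes, sounds)]
random.shuffle(trials)

print(sum(t["congruent"] for t in trials), "of", len(trials), "trials congruent")
```

Crossing shapes and sounds exhaustively yields an equal number of congruous and incongruous trials, so naming accuracy can be compared across the two trial types without a frequency confound.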
A shape–label matching task is commonly used to examine the self-advantage in motor reaction-time responses (the Self-Prioritization Effect; SPE). In the present study, auditory labels were introduced, and, for the first time, responses to unisensory auditory, unisensory visual, and multisensory object–label stimuli were compared across block-type (i.e., trials blocked by sensory modality type, and intermixed trials of unisensory and multisensory stimuli). Auditory stimuli were presented at either 50 dB (Group 1) or 70 dB (Group 2). The participants in Group 2 also completed a multisensory detection task, making simple speeded motor responses to the shape and sound stimuli and their multisensory combinations. In the matching task, the SPE was diminished in intermixed trials, and in responses to the unisensory auditory stimuli as compared with the multisensory (visual shape + auditory label) stimuli. In contrast, the SPE did not differ between responses to the unisensory visual and multisensory (auditory object + visual label) stimuli. The matching task was associated with multisensory 'costs' rather than gains, but response times to self- versus stranger-associated stimuli were differentially affected by the type of multisensory stimulus (auditory object + visual label, or visual shape + auditory label). The SPE was thus modulated both by block-type and by the combination of object and label stimulus modalities. There was no SPE in the detection task. Taken together, these findings suggest that the SPE with unisensory and multisensory stimuli is modulated by both stimulus- and task-related parameters within the matching task, and that the SPE does not transfer to a significant motor speed gain when the self-associations are not task-relevant.
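The block-type manipulation can be sketched in a few lines of Python; this is an assumed design illustration (modality codes and trial counts are hypothetical), not the published trial lists.

```python
import random

random.seed(0)

MODALITIES = ["A", "V", "AV"]  # unisensory auditory, unisensory visual, multisensory
N_PER_MODALITY = 12            # illustrative count, not the published design

def blocked() -> list[list[str]]:
    """Blocked condition: one homogeneous block per sensory modality type."""
    return [[m] * N_PER_MODALITY for m in MODALITIES]

def intermixed() -> list[str]:
    """Intermixed condition: unisensory and multisensory trials shuffled together."""
    trials = [m for m in MODALITIES for _ in range(N_PER_MODALITY)]
    random.shuffle(trials)
    return trials

print(blocked()[0][:5])   # ['A', 'A', 'A', 'A', 'A']
print(intermixed()[:5])   # e.g. ['AV', 'V', 'A', 'AV', 'V']
```

In the blocked condition the upcoming stimulus modality is fully predictable, whereas in the intermixed condition it is not, which is the contrast under which the SPE was diminished.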