In everyday behavior, two of the most common visually guided actions, eye and hand movements, can be performed independently but are often synergistically coupled. In this study, we examine whether the same visual representation is used for different stages of saccades and pointing, namely movement preparation and execution, and whether this usage is consistent between independent and naturalistic, coordinated eye and hand movements. To address these questions, we used the Ponzo illusion to dissociate the perceived and physical sizes of visual targets and measured the effects on movement preparation and execution for independent and coordinated saccades and pointing. During independent movements, we found that both physically and perceptually larger targets produced faster preparation for both effectors. Furthermore, participants who showed a greater influence of the illusion on saccade preparation also showed a greater influence on pointing preparation, suggesting that a shared mechanism involved in preparation across effectors is influenced by illusions. However, only physical, but not perceptual, target size influenced saccade and pointing execution. When pointing was coordinated with saccades, we observed different dynamics: pointing no longer showed modulation by illusory size, while saccades showed illusion modulation for both preparation and execution. Interestingly, in both independent and coordinated movements, the illusion modulated saccade preparation more than pointing preparation, and this effect was more pronounced during coordination. These results suggest that a shared mechanism, dominated by the eyes, may underlie visually guided action preparation across effectors. Furthermore, the influence of illusions on action may operate within such a mechanism, leading to dynamic interactions between action modalities based on task demands.
In daily life, two aspects of real-world object size perception, the image size of an object and its familiar size in the real world, are highly correlated. As a result, whether and how these two aspects of object size differently affect goal-directed action (e.g., manual pointing) has scarcely been examined. Here, participants reached to touch one of two simultaneously presented objects based on either their image or familiar size, which could be congruent or incongruent (e.g., a rubber duck presented as smaller and larger than a boat, respectively). We observed that when pointing to target objects in the incongruent conditions, participants' movements were slower and more curved toward the incorrect object than in the congruent conditions. By comparing performance in the congruent and incongruent conditions, we concluded that both image size and familiar size influenced action even when task irrelevant, indicating that both are processed automatically (Konkle & Oliva, 2012a). Image size, however, exerted its influence earlier in the movement and more robustly overall than familiar size did. We additionally found that greater relative familiar size differences mitigated the impact of image size processing and increased the impact of familiar size processing on pointing movements. Overall, our data suggest that image size and familiar size perception interact both with each other and with visually guided action, but that the relative contributions of each are unequal and vary based on task demands.

The familiar size of a rubber duck, the size that we know it typically would be based on past experience, and the image size of "Rubber Duck," the size the piece appeared visually to viewers, were dramatically in conflict. In normal daily life, the familiar size and image size of objects are highly correlated. When presented in the same context, real-world objects that we know to be relatively small, such as rubber ducks, typically appear smaller than objects such as boats, which we know to be larger. Even when the sizes of objects on the retina vary, they are integrated with their environment via size constancy mechanisms. Thus, taking size constancy into account, image size and familiar size are very rarely in conflict in the real world. Consequently, image size and familiar size processing are highly confounded: when we see an object whose image size and familiar size are congruent, it is difficult to disambiguate how each of these two aspects of size affects our perception and goal-directed action.

Attempts have recently been made to disentangle image size and familiar size perception. Konkle and Oliva (2012a) implemented a Stroop-like paradigm in which pairs of objects were presented at different image sizes on the screen. Participants were asked to indicate by key press which object's image size was bigger or smaller, while the objects' familiar sizes were task irrelevant. This experiment demonstrated that incongruence between familiar size and image size (e.g., a rubber duck presented with a larger i...