When a log burns, it transforms from a block of wood into a pile of ash. Such state changes are among the most dramatic ways objects change, going beyond mere changes of position or orientation. How does the mind represent changes of state? A foundational result in visual cognition is that memory extrapolates the positions of moving objects—a distortion called representational momentum. Here, five experiments (N = 400 adults) exploited this phenomenon to investigate mental representations in state space. Participants who viewed objects undergoing state changes (e.g., ice melting, logs burning, or grapes shriveling) remembered them as more changed (e.g., more melted, burned, or shriveled) than they actually were. This pattern extended to several types of state changes, went beyond their low-level properties, and even adhered to their natural trajectories in state space. Thus, mental representations of objects actively incorporate how they change—not only in their relation to their environment, but also in their essential qualities.
People make fast and reasonable predictions about the physical behavior of everyday objects. To do so, people may be using principled approximations, similar to the simplified models engineers develop for real-time physical simulation. We hypothesize that people use simplified object approximations for tracking and action (the "body" representation), as opposed to fine-grained forms for recognition (the "shape" representation). We used three classic psychophysical tasks (causality perception, collision detection, and change detection) in novel settings that dissociate body and shape. People's behavior across tasks indicates that they rely on approximate bodies for physical reasoning, and that this approximation lies between convex hulls and fine-grained shapes.
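To make the spectrum between convex hulls and fine-grained shapes concrete, here is a minimal sketch (not the authors' implementation) of three candidate "body" approximations of a 2D outline, built with shapely; the wavy test shape and the simplification tolerance are arbitrary illustrative choices.

```python
# Illustrative sketch: a spectrum of "body" approximations for a 2D shape,
# from the fine-grained outline down to the coarse convex hull.
# The test shape and tolerance are arbitrary choices, not from the paper.
import math
from shapely.geometry import Polygon

# Fine-grained "shape": a wavy closed outline with 72 vertices.
outline = [
    ((1.0 + 0.1 * math.sin(8 * a)) * math.cos(a),
     (1.0 + 0.1 * math.sin(8 * a)) * math.sin(a))
    for a in (2 * math.pi * i / 72 for i in range(72))
]
shape = Polygon(outline)

# Intermediate "body": Douglas-Peucker simplification of the outline.
body = shape.simplify(tolerance=0.05, preserve_topology=True)

# Coarsest approximation: the convex hull fills in every concavity.
hull = shape.convex_hull

for name, geom in [("shape", shape), ("body", body), ("hull", hull)]:
    n_vertices = len(geom.exterior.coords) - 1  # ring repeats its first point
    print(f"{name:5s}  vertices={n_vertices:3d}  area={geom.area:.3f}")
```

Printing vertex counts and areas shows how each step discards outline detail while roughly preserving extent, which is the kind of trade-off the "body" hypothesis attributes to physical reasoning.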
A central puzzle the visual system tries to solve is: “what is where?” While a great deal of research attempts to model object recognition (“what”), a comparatively small body of work seeks to model object location (“where”), especially in perceiving everyday objects. How do people locate an object, right now, in front of them? In three experiments collecting over 35,000 judgements on stimuli spanning different levels of realism (line drawings, real images, and crude forms), participants clicked “where” an object is, as if pointing to it. We modeled their responses with eight different methods, including both human response-based models (judgements of physical reasoning, spatial memory, free-response “click anywhere” judgements, and judgements of where people would grab the object) and image-based models (uniform distributions over the image, convex hull, saliency map, and medial axis). Physical reasoning was the best predictor of “where,” performing significantly better than even spatial memory and free-response judgements. Our results offer insight into the perception of object locations while also raising interesting questions about the relationship between physical reasoning and visual perception.
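As an illustration of the image-based predictors listed above, the following sketch (not the authors' code) derives three candidate “where” points from a binary object mask using numpy and scikit-image: the mask centroid, the centroid of the convex hull, and the deepest point on the medial axis. The rectangular test mask is a made-up example.

```python
# Illustrative sketch: three image-based baselines for predicting "where"
# an object is, given a binary object mask. The test mask is made up.
import numpy as np
from skimage.morphology import convex_hull_image, medial_axis

def baseline_predictions(mask: np.ndarray) -> dict:
    """Return candidate (row, col) 'where' points for a binary mask."""
    preds = {}

    # Centroid of the mask itself (mean of all object pixels).
    rows, cols = np.nonzero(mask)
    preds["mask_centroid"] = (rows.mean(), cols.mean())

    # Centroid of the filled convex hull.
    hr, hc = np.nonzero(convex_hull_image(mask))
    preds["hull_centroid"] = (hr.mean(), hc.mean())

    # Skeleton point deepest inside the object (medial-axis peak).
    skel, dist = medial_axis(mask, return_distance=True)
    r, c = np.unravel_index(np.argmax(dist * skel), mask.shape)
    preds["medial_axis_peak"] = (float(r), float(c))

    return preds

# Example: an off-center rectangular "object" in a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 10:90] = True
print(baseline_predictions(mask))
```

One plausible way to score such baselines, in the spirit of the abstract, is by their distance to participants' actual click locations, alongside the human response-based predictors.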