Following an active task, the memory representations for used and unused objects differ. However, it is not clear whether these differences arise from prioritizing objects that are task-relevant, objects that are physically interacted with, or a combination of the two factors. The present study allowed us to tease apart the relative contributions of task-relevance and physical manipulation to object memory. A paradigm was designed in which objects were either necessary to complete a task (target), moved out of the way (obstructing, but interacted with), or simply present in the environment (background). Participants' eye movements were recorded with a portable tracker during the task, and they received a memory test on the objects after the task was completed. Results showed that manipulating an object is sufficient to change how information is extracted and retained from fixations, compared to background objects. Task-relevance provides an additional influence: information is accumulated and retained differently for manipulated target objects than for manipulated obstructing objects. These findings demonstrate that object memory is influenced both by whether we physically interact with an object and by the relevance of that object to our behavioral goals.

Keywords: Action · Perception · Memory · Intention · Real-world
In everyday life, we perform actions in order to achieve a goal, informed by perceptual input about the current state of the environment. Despite this strong link between perception and action, it is only relatively recently that this link has been represented in cognitive theories, in particular those of grounded cognition. There are multiple iterations of these theories (see, e.g., Wilson, 2002), but they all make the same underlying proposal: perception and action are more than low-level input/output devices; instead, they are dynamic, interrelated processes that allow us to navigate and work within our environment.

There is growing evidence that action influences perception. Laboratory-based tasks have demonstrated how preparing an action can affect performance in visual search: for example, Fagioli, Hommel, and Schubotz (2007) showed that, depending on the gesture prepared prior to search (either pointing or grasping), detection of an oddball target stimulus was facilitated if the gesture was compatible with the relevant feature of the target. Thus, preparing pointing actions facilitated the detection of location-defined stimuli, while preparing grasping actions assisted the detection of size-defined stimuli. Similar results were found in a change-detection study (Symes, Tucker, Ellis, Vainio, & Ottoboni, 2008). When participants prepared a particular grip type (power or precision), changes made to stimuli compatible with the size of the grip aperture were spotted faster and more accurately than changes to grip-incompatible objects. In both cases, it is apparent that the mere preparation of a particular gesture is enough to bias attention to those items in a scene containing some feature compatible with the prepared action.