The three experiments reported here demonstrated a cross-modal influence of an auditory rhythm on the temporal allocation of visual attention. In Experiment 1, participants moved their eyes to a test dot whose onset was either synchronous or asynchronous with a preceding auditory rhythm. Saccadic latencies were shorter in the synchronous condition than in the asynchronous conditions. In Experiment 2, the effect was replicated in a condition in which the auditory context stopped before the onset of the test dot, and it did not occur when the auditory tones were presented at irregular intervals. Experiment 3 replicated the effect with an accuracy measure in an untimed visual task. Together, these findings support a general entrainment perspective on attention to events over time.
People use salient landmarks when learning a route through a novel environment. However, it is not clear what makes a given landmark salient. In two experiments, subjects learned a route through a virtual museum, performed a recognition memory test for objects in the museum, and provided spatial descriptions and drew maps of the learned route. Objects with strong perceptual features were located either at decision points or at non-decision points along the route. Objects that combined both features, perceptual salience and placement at a decision point, were recognized faster and were included more often in the maps and written directions. When these features were separated, perceptual features retained a strong influence on the recognition task but had no influence on the spatial tasks, which were influenced only by spatial features. These findings challenge the idea that either a recognition task or a descriptive task alone provides a complete account of landmark representation.
This contribution presents a corpus of spatial descriptions and describes the development of a human-driven spatial language robot system for comprehending them. The domain of application is an eldercare setting in which an assistive robot is asked to "fetch" an object for an elderly resident based on a natural language spatial description provided by the resident. In Part One, we describe a corpus of naturally occurring descriptions elicited from a group of older adults within a virtual 3D home that simulates the eldercare setting. We contrast descriptions elicited when participants addressed a human versus a robot avatar, and when they were instructed to tell the addressee how to find the target versus where the target is. We summarize the key features of the spatial descriptions, including their dynamic versus static nature and the perspective adopted by the speaker. In Part Two, we discuss the cognitive and perceptual processing capabilities the robot needs in order to establish common ground with the human user and perform the "fetch" task. Drawing on the collected corpus, we focus here on resolving perspective ambiguity and on recognizing the furniture items used as landmarks in the descriptions. Taken together, the work presented here offers the key building blocks of a robust system that takes natural spatial language descriptions as input and produces commands that drive the robot to successfully fetch objects within our eldercare scenario.
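To make the Part Two processing steps concrete, here is a minimal sketch, in Python, of the kind of pipeline the abstract describes: ground a furniture landmark, resolve whose perspective the description takes, and assemble a fetch command. It is purely illustrative, not the authors' implementation; every name in it (FetchCommand, resolve_perspective, the furniture lexicon, the keyword heuristics) is a hypothetical stand-in for the corpus-driven components discussed above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical landmark lexicon; the system described above recognizes
# furniture items perceptually rather than from a fixed word list.
FURNITURE = {"couch", "table", "bed", "dresser", "chair", "shelf"}

@dataclass
class FetchCommand:
    target: str       # object to fetch, e.g. "glasses"
    landmark: str     # furniture item anchoring the description
    relation: str     # spatial relation, e.g. "on", "next to"
    perspective: str  # "speaker" or "robot" frame of reference

def resolve_perspective(text: str) -> str:
    """Toy heuristic: 'your left/right' suggests the robot's (addressee's)
    perspective; 'my left/right' suggests the speaker's."""
    lowered = text.lower()
    if "your left" in lowered or "your right" in lowered:
        return "robot"
    if "my left" in lowered or "my right" in lowered:
        return "speaker"
    return "speaker"  # assumed default for ambiguous descriptions

def find_landmark(text: str) -> Optional[str]:
    """Return the first known furniture item mentioned, if any."""
    for word in text.lower().replace(",", " ").replace(".", " ").split():
        if word in FURNITURE:
            return word
    return None

def parse_description(target: str, text: str) -> Optional[FetchCommand]:
    landmark = find_landmark(text)
    if landmark is None:
        return None  # no groundable landmark; ask the user to clarify
    relation = "near"  # placeholder; a full parser would extract "on", "under", ...
    return FetchCommand(target, landmark, relation, resolve_perspective(text))

print(parse_description("glasses", "They are on the table, to your left."))
```

Running the example prints a FetchCommand anchored to "table" with perspective "robot", reflecting the "your left" cue; an ambiguous description would fall back to the assumed speaker perspective.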