The passing of time can be measured precisely with clocks, whereas humans' estimation of temporal durations is influenced by many physical, cognitive, and contextual factors that distort our internal clock. Although it has been shown that temporal estimation accuracy is impaired by non-temporal tasks performed at the same time, no studies have investigated how concurrent cognitive and motor tasks jointly interfere with time estimation. Moreover, most experiments have tested only time intervals of a few seconds. In the present study, participants were asked to perform cognitive tasks of varying difficulty (looking, reading, solving simple or hard mathematical operations) and to estimate durations of up to two minutes, while walking or sitting. The results show that when observers pay attention only to time, without performing any other mental task, they tend to overestimate durations, whereas the more difficult the concurrent task, the more they tend to underestimate elapsed time. These distortions are even more pronounced when observers are walking. Estimation biases and uncertainties change with duration in a task-dependent way, consistent with a fixed relative uncertainty (a roughly constant ratio of estimation variability to duration). Our findings show that cognitive and motor systems interact non-linearly and interfere with time perception processes, suggesting that they all compete for the same resources.
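The "fixed relative uncertainty" pattern corresponds to the scalar property of interval timing: the spread of duration estimates grows in proportion to the duration itself, so their coefficient of variation stays roughly constant. A minimal illustrative sketch of this idea follows; it is not the authors' analysis, and the bias and Weber-fraction values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, for illustration only:
# a multiplicative bias (<1 means underestimation, e.g. under a demanding
# concurrent task) and a fixed relative uncertainty (Weber fraction).
bias = 0.9
weber_fraction = 0.15

durations = np.array([15.0, 30.0, 60.0, 120.0])  # seconds, up to two minutes

for t in durations:
    # Estimation noise scales with the duration itself -> scalar property.
    estimates = rng.normal(loc=bias * t, scale=weber_fraction * t, size=10_000)
    cv = estimates.std() / estimates.mean()
    print(f"T = {t:6.1f} s   mean estimate = {estimates.mean():6.1f} s   CV = {cv:.2f}")

# The coefficient of variation (CV) stays roughly constant across durations,
# which is what a fixed relative uncertainty predicts.
```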
In naturalistic conditions, objects in a scene may be partly occluded, and the visual system has to recognize the whole image from the little information contained in a few visible fragments. Previous studies have demonstrated that humans can successfully recognize severely occluded images, but the underlying mechanisms operating in the early stages of visual processing are still poorly understood. The main objective of this work is to investigate the contribution of local information contained in a few visible fragments to image discrimination in fast vision. It has already been shown that a specific set of features, predicted by a constrained maximum-entropy model to be optimal carriers of information (optimal features), is used to build simplified early visual representations (a primal sketch) that are sufficient for fast image discrimination. These features are also considered salient by the visual system and can guide visual attention when presented in isolation in artificial stimuli. Here, we explore whether these local features also play a significant role in more natural settings, where all existing features are kept but the overall available information is drastically reduced. Specifically, the task requires discriminating naturalistic images based on a very brief presentation (25 ms) of a few small visible image fragments. In the main experiment, we reduced the possibility of performing the task on the basis of global-luminance positional cues by presenting randomly contrast-inverted images, and we measured the extent to which observers' performance relies on the local features contained in the fragments or on global information. The size and number of fragments were determined in two preliminary experiments. The results show that observers are very skilled at fast image discrimination, even under drastic occlusion. When observers cannot rely on the position of global-luminance information, the probability of correct discrimination increases when the visible fragments contain a high number of optimal features. These results suggest that such optimal local information contributes to the successful reconstruction of naturalistic images even in challenging conditions.
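As a rough illustration of the stimulus manipulation described (occlusion of all but a few fragments, combined with random contrast inversion), the sketch below masks a grayscale image except for a few square fragments and inverts its contrast on a random subset of trials. The fragment count, fragment size, gray background, and image source are illustrative assumptions, not the study's actual parameters, and the 25 ms presentation would be handled by the experiment software rather than by this function.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_stimulus(image, n_fragments=4, fragment_size=32, invert_p=0.5):
    """Occlude a grayscale image (values in [0, 1]) except for a few square
    fragments, optionally inverting its contrast first.

    Fragment count/size and the mid-gray occluder are placeholder choices,
    not the parameters used in the study.
    """
    h, w = image.shape
    if rng.random() < invert_p:          # random contrast inversion
        image = 1.0 - image
    stimulus = np.full_like(image, 0.5)  # mid-gray occluder everywhere
    for _ in range(n_fragments):
        y = rng.integers(0, h - fragment_size)
        x = rng.integers(0, w - fragment_size)
        stimulus[y:y + fragment_size, x:x + fragment_size] = \
            image[y:y + fragment_size, x:x + fragment_size]
    return stimulus

# Example with a synthetic 256x256 image in place of a naturalistic photograph.
demo = make_stimulus(rng.random((256, 256)))
```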