When searching for an object, do we minimize the number of eye movements we need to make? Under most circumstances, the cost of saccadic parsimony likely outweighs the benefit: the cost is extensive computation, while the benefit is only a few hundred milliseconds of time saved. Previous research has measured the proportion of eye movements directed to locations where the target would have been visible in the periphery, as a way of quantifying the proportion of superfluous fixations. A surprisingly large range of individual differences has emerged from these studies, suggesting that some people are highly efficient and others much less so. Our question in the current study is whether these individual differences can be explained by differences in motivation. In two experiments, we demonstrate that neither time pressure nor financial incentive led to improvements in visual search strategy; the majority of participants continued to make many superfluous fixations in both experiments. The wide range of individual differences in efficiency observed previously was replicated here. We observed small but consistent improvements in strategy over the course of the experiment (regardless of reward or time pressure), suggesting that practice, not motivation, makes participants more efficient.
Visual object recognition is a highly dynamic process by which we extract meaningful information about the things we see. However, the functional relevance and informational properties of feedforward and feedback signals remain largely unspecified. Additionally, it remains unclear whether computational models of vision alone can accurately capture object-specific representations and the evolving spatiotemporal neural dynamics. Here, we probe these dynamics using a combination of representational similarity and connectivity analyses of fMRI and MEG data recorded during the recognition of familiar, unambiguous objects from a wide range of categories. Modelling the visual and semantic properties of our stimuli using an artificial neural network as well as a semantic feature model, we find that unique aspects of the neural architecture and connectivity dynamics relate to visual and semantic object properties. Critically, we show that recurrent processing between anterior and posterior ventral temporal cortex relates to higher-level visual properties before it relates to semantic object properties, and that semantic-related feedback flows from the frontal lobe to the ventral temporal lobe between 250 and 500 ms after stimulus onset. These results demonstrate the distinct contribution semantic object properties make in explaining neural activity and connectivity, highlighting that semantic processing is a core part of object recognition.
It is well established that seeing the face of a speaker can substantially improve speech perception, especially under adverse listening conditions. However, previous studies have demonstrated that this audiovisual benefit is highly variable across individuals and measurement indices (Grant & Seitz, 1998; Tye-Murray et al., 2016). Here we present a planned study designed to quantify the audiovisual benefit for acoustically degraded English single phonemes, words, and sentences in the general, healthy population, alongside pilot data testing audiovisual perception of phonemes and words (N=7, data collection ongoing) and re-analyses of existing behavioural data on audiovisual sentences (N=14). Rather than comparing changes in intelligibility due to adding visual speech (which is prone to floor and ceiling effects), we measure the relative intelligibility of matched audiovisual (AV) and auditory-only (AO) speech. Our study will add to the existing literature by establishing the distribution of audiovisual speech perception skills and benefit in the general population.