We investigated processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as their heterogeneity. More difficult visual search resulted in greater pupil dilation than did less difficult search. These results confirm a link between effort and increased pupil dilation. The pupil dilated more during the counting task than during target-absent search, even though the displays were identical and the two tasks were matched for reaction time. The moment-to-moment dilation pattern during search suggested little effort in the early stages but increasing effort towards the response, whereas the counting task involved greater initial effort that was sustained throughout the trial. These patterns can be interpreted in terms of the differential memory load for item locations in each task. In an additional experiment, increasing the spatial memory requirements of the search evoked a corresponding increase in pupil dilation. These results support the view that search tasks involve some, but limited, memory for item locations, and that the effort associated with this memory load increases during the trial. In contrast, counting involves a heavy locational memory component from the start.
We used an exogenous target detection cueing paradigm to examine whether intra-individual reaction time variability (IIV) or phasic alerting differed significantly between patients with amnestic mild cognitive impairment (aMCI; n = 45) and healthy older adult controls (n = 31), or between those with aMCI who developed dementia within a 2.5-year follow-up period (n = 13) and those who did not (n = 26). Neither IIV nor simple reaction time differentiated aMCI from healthy aging, indicating that raised IIV and overall response slowing are not general characteristics of aMCI. Within the aMCI group, however, IIV did differentiate between those who converted to dementia and those who retained a diagnosis of aMCI (non-converters): responses were significantly more variable in those who later developed dementia. Furthermore, there was no difference in IIV between non-converters and healthy controls. High IIV thus appears related to an increased probability that an individual with aMCI will develop dementia within 2.5 years, rather than to amnestic dysfunction per se. In contrast, phasic alerting performance significantly differentiated aMCI from healthy aging but failed to discriminate those with aMCI who developed dementia from those who did not. In addition, patients with aMCI who did not develop dementia still showed a significantly poorer phasic alerting effect than healthy controls. The phasic alerting abnormality in aMCI relative to healthy aging therefore does not appear specifically related to the performance of those patients for whom aMCI represents the prodromal stage of dementia.
How do we visually encode facial expressions? Are they represented by viewpoint-dependent mechanisms as two-dimensional templates, or do we build more complex, viewpoint-independent three-dimensional representations? Recent facial adaptation techniques offer a powerful way to address these questions. Prolonged viewing of a stimulus (adaptation) changes the perception of subsequently viewed stimuli (an after-effect), and adaptation to a particular attribute is believed to target the neural mechanisms encoding that attribute. We gathered images of facial expressions taken simultaneously from five viewpoints evenly spaced from the three-quarter leftward to the three-quarter rightward facing view, and we measured the strength of expression after-effects as a function of the difference between adaptation and test viewpoints. Our data show that, although the after-effect decreases as the test viewpoint departs from the adaptation viewpoint, a substantial after-effect remains when adaptation and test are at different three-quarter views. We take these results to indicate that the neural systems encoding facial expressions contain a mixture of viewpoint-dependent and viewpoint-independent elements. This accords with evidence from single-cell recording studies in the macaque and is consonant with a view in which viewpoint-independent expression encoding arises from a combination of view-dependent, expression-sensitive responses.