Evidence of attentional atypicalities for faces in Autism Spectrum Disorders (ASD) is far from conclusive. Using eye-tracking technology, we compared space-based and object-based attention in children with, and without, a diagnosis of ASD. Capitalizing on Egly’s paradigm, we presented two objects (two faces and their phase-scrambled equivalents) and cued a location in one of the two objects. A target then appeared at the same location as the cue (Valid condition), at a different location within the same object (Same Object condition), or at a different location in the other object (Different Object condition). We computed the attentional benefit/cost in terms of target-detection time in each of the three conditions. The findings revealed that target detection was always faster in the Valid condition than in the invalid conditions, regardless of the type of stimulus and the group of children. Thus, no difference emerged between the two groups in terms of space-based attention. Conversely, the two groups differed in object-based attention: children without a diagnosis of ASD showed an attentional shift cost with phase-scrambled stimuli, but not with faces, whereas children with a diagnosis of ASD deployed similar attentional strategies to focus on faces and their phase-scrambled versions.
Much of our basic understanding of cognitive and social processes in infancy relies on measures of looking time, and specifically on infants’ visual preference for a novel or familiar stimulus. However, despite being the foundation of many behavioral tasks in infant research, the determinants of infants’ visual preferences are poorly understood, and differences in the expression of preferences can be difficult to interpret. In this large-scale study, we test predictions from the Hunter and Ames model of infants' visual preferences. We investigate the effects of three factors predicted by this model to determine infants’ preference for novel versus familiar stimuli: age, stimulus familiarity, and stimulus complexity. Drawing from a large and diverse sample of infant participants (N = XX), this study will provide crucial empirical evidence for a robust and generalizable model of infant visual preferences, leading to a more solid theoretical foundation for understanding the mechanisms that underlie infants’ responses in common behavioral paradigms. Moreover, our findings will guide future studies that rely on infants' visual preferences to measure cognitive and social processes.
Speech preferences emerge very early in infancy, pointing to a special status for speech in auditory processing and a crucial role of prosody in driving infant preferences. Recent theoretical models suggest that infant auditory perception may initially encompass a broad range of human and nonhuman vocalizations, then tune in to the sounds relevant for acquiring species‐specific communication. However, little is known about the sound properties that elicit infants’ tuning‐in to speech. To address this issue, we presented one group of 4‐month‐olds with segments of non‐native speech (Mandarin Chinese) and birdsong, a nonhuman vocalization that shares some prosodic components with speech. A second group of infants was presented with the same segment of birdsong paired with Mandarin played in reverse. Infants showed an overall preference for birdsong over non‐native speech. Moreover, infants in the Backward condition preferred birdsong over backward speech, whereas infants in the Forward condition did not show a clear preference. These results confirm the prominent role of prosody in early auditory processing and suggest that infants’ preferences may privilege communicative vocalizations characterized by certain prosodic dimensions, regardless of whether the biological source of the sound is human or nonhuman.
Do novel linguistic labels have privileged access to attentional resources compared to non-linguistic labels? This study explores this possibility through two experiments, each combining a training task with an attentional overlap task. Experiment 1 investigates how novel label–object and object-only stimuli influence the allocation and disengagement of visual attention. Experiment 2 tests the impact of linguistic information on visual attention by comparing novel tones and labels. Because disengagement of attention is affected both by the saliency of the perceptual stimulus and by the degree of familiarity with the stimulus to be disengaged from, we compared pupil-size variations and saccade latency under different test conditions: (i) consistent with (i.e., identical to) the training; (ii) inconsistent with the training (i.e., with an altered feature); and (iii) deprived of one feature (Experiment 1 only). Experiment 1 indicated a general consistency advantage (and a deprived-condition disadvantage) driven by linguistic label–object pairs compared to object-only stimuli. Experiment 2 revealed that tone–object pairs led to greater pupil dilation and longer saccade latency than linguistic label–object pairs. Our results suggest that novel linguistic labels preferentially impact the early orienting of attention.