Manual skills develop slowly throughout infancy and have been shown to create clear views of objects that better support visually sustained attention, recognition, memory, and learning. It is unclear whether such clear views must await the development of manual skills, or whether social scaffolding can support clear viewing experiences like those generated by toddlers during active object exploration. This study used a head‐mounted eye tracker to record 5‐ to 24‐month‐olds' object views during repeated mother‐infant play sessions (N = 18). Results show an early beginning of scaffolding in which parents generate views similar to those of older infants and toddlers, resulting in increased fixations to objects. The finding implicates parents as early scaffolders of object attention and learning.
The present study focused on parents' use of social cues in relation to young children's attention. Participants were ten parent–child dyads; all children were 36 to 60 months old and were either typically developing (TD) or diagnosed with autism spectrum disorder (ASD). Children wore a head-mounted camera that recorded the child's proximate, first-person view while their parent played with them. The study compared the TD and ASD groups on: (a) the frequency of parents' gesture use; (b) parents' monitoring of their child's face; and (c) how children looked at parents' gestures. Results from Bayesian estimation indicated that, compared to the TD group, parents of children with ASD produced more gestures, monitored their children's faces more closely, and provided more scaffolding for their children's visual experiences. Our findings suggest the importance of further investigating parents' visual and gestural scaffolding as a potential developmental mechanism for children's early learning, including for children with ASD.
The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using a visual search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicate that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document it in children have yielded mixed results. We examined the effect of search speed on contextual cueing in school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates the effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the allocation of attention toward contextually cued information.
What we attend to at any moment determines what we learn at that moment, and this in turn depends on our past learning. This focused conceptual paper concentrates on a single well-documented attention mechanism: highlighting. This phenomenon, well studied in non-linguistic but not in linguistic contexts, should be highly relevant to language learning because it is a process that (1) specifically protects past learning from being disrupted by new (and potentially spurious) associations in the learning environment, and (2) strongly constrains new learning to new information. Within the language learning context, highlighting may disambiguate ambiguous references and may be related to the processes of lexical competition known to be critical to online sentence comprehension. The main sections of the paper address (1) the highlighting phenomenon in the literature; (2) its relevance to language learning; (3) the highlighting effect in children; (4) developmental studies concerning the effect in different contexts; and (5) a developmental mechanism for highlighting in language learning.
Despite the sparse visual information and paucity of self-identifying cues provided by point-light stimuli, as well as a dearth of experience in seeing our own-body movements, people can identify themselves solely based on the kinematics of body movements. The present study found converging evidence of this remarkable ability using a broad range of actions with whole-body movements. In addition, we found that individuals with a high degree of autistic traits showed worse performance in identifying own-body movements, particularly for simple actions. A Bayesian analysis showed that action complexity modulates the relationship between autistic traits and self-recognition performance. These findings reveal the impact of autistic traits on the ability to represent and recognize own-body movements.