This study investigates attention orienting to social stimuli in children with Autism Spectrum Conditions (ASC) during dyadic social interactions in real-life settings. We study the effect of social cues that differ in complexity, distinguishing between cues produced by facial expressions of emotion and those produced during speech. We record the children's gaze using a head-mounted eye-tracking device and report a detailed, quantitative analysis of gaze motion in response to the social cues. The study encompasses a group of children with ASC aged 2 to 11 years (n = 14) and a group of typically developing (TD) children aged 3 to 6 years (n = 17). While both groups orient overtly to facial expressions, children with ASC do so to a lesser extent. Children with ASC differ markedly from TD children in how they respond to speech cues, displaying little overt shifting of attention to speaking faces. When children with ASC do orient to facial expressions, their reaction times and first fixation lengths are similar to those of TD children. However, they orient to speaking faces more slowly than TD children. These results support the hypothesis that individuals with ASC have difficulty processing complex social sounds and detecting intermodal correspondence between facial and vocal information. They also corroborate evidence that people with ASC show reduced overt attention toward social stimuli.
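The abstract does not spell out how reaction time and first fixation length were derived from the eye-tracking record; the sketch below is only a hypothetical illustration of one way such measures could be computed from time-stamped gaze annotations. The data format, the sampling, and the minimum-fixation threshold are assumptions, not the authors' method.

```python
# Hypothetical sketch, not the authors' analysis code: deriving a reaction time
# (latency from cue onset to the first fixation on the face) and the length of
# that first fixation from time-stamped gaze annotations. The input format
# (timestamp in seconds, boolean "gaze on face") and the minimum fixation
# duration are assumptions made for this illustration.

def reaction_time_and_first_fixation(samples, cue_onset, min_fix_dur=0.1):
    """Return (reaction_time, first_fixation_length) in seconds,
    or None if no qualifying fixation follows the cue onset."""
    fix_start = None
    for t, on_face in samples:
        if t < cue_onset:
            continue
        if on_face and fix_start is None:
            fix_start = t                     # candidate fixation onset
        elif not on_face and fix_start is not None:
            if t - fix_start >= min_fix_dur:  # long enough to count as a fixation
                return fix_start - cue_onset, t - fix_start
            fix_start = None                  # too brief: treat as a glance

# Example: cue at t = 2.0 s, gaze lands on the face at 2.8 s and leaves at 3.4 s.
gaze = [(2.0, False), (2.4, False), (2.8, True), (3.0, True), (3.4, False)]
print(reaction_time_and_first_fixation(gaze, cue_onset=2.0))  # ≈ (0.8, 0.6)
```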
Bronchiolitis is the most common cause of hospitalization in children during the first year of life, and pneumonia is the leading cause of infant mortality worldwide. Lung ultrasound (LUS) is a novel imaging diagnostic tool for the early detection of respiratory distress and offers several advantages owing to its low cost, relative safety, portability, and easy repeatability. More precise and efficient diagnostic and therapeutic strategies are needed. Deep-learning-based computer-aided diagnosis (CADx) systems using chest X-ray images have recently demonstrated their potential as a screening tool for pulmonary disease (such as COVID-19 pneumonia). We present the first computer-aided diagnostic scheme for LUS images of pulmonary diseases in children. In this study, we trained four state-of-the-art deep-learning models (VGG19, Xception, Inception-v3, and Inception-ResNet-v2) from scratch to detect children with bronchiolitis and pneumonia. In our experiments we used a data set consisting of 5,907 images from 33 healthy infants, 3,286 images from 22 infants with bronchiolitis, and 4,769 images from 7 children suffering from bacterial pneumonia. Using four-fold cross-validation, we implemented one binary classification (healthy vs. bronchiolitis) and one three-class classification (healthy vs. bronchiolitis vs. bacterial pneumonia). Affine transformations were applied for data augmentation, and hyperparameters (learning rate, dropout regularization, batch size, and number of epochs) were optimized. On the test sets, the Inception-ResNet-v2 model provided the highest performance for the binary task (healthy vs. bronchiolitis: 97.75% accuracy, 97.75% sensitivity, and 97% specificity), whereas the Inception-v3 model performed best on the three-class task (healthy vs. bronchiolitis vs. bacterial pneumonia: 91.5% accuracy, 91.5% sensitivity, and 95.86% specificity). We performed gradient-weighted class activation mapping (Grad-CAM) visualization, and the results were qualitatively evaluated by a pediatrician expert in LUS imaging: the heatmaps highlight areas containing diagnostically relevant LUS imaging artifacts, e.g., A-lines, B-lines, pleural lines, and consolidations. These complex patterns are learnt automatically from the data, avoiding the use of hand-crafted features. The proposed framework might thus aid in the development of an accessible and rapid decision-support method for diagnosing pulmonary diseases in children from LUS imaging.
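As a hedged illustration of the pipeline described above (not the authors' code), the sketch below builds one of the listed architectures from scratch in Keras, applies affine augmentation, and computes a Grad-CAM heatmap. The directory layout, image size, augmentation ranges, and hyperparameter values are placeholders rather than the tuned settings reported in the study.

```python
# Illustrative sketch only: from-scratch training of Inception-ResNet-v2 on LUS
# frames with affine augmentation, plus a Grad-CAM heatmap. Paths, class count,
# and hyperparameter values are assumptions for the example.
import tensorflow as tf

NUM_CLASSES = 3  # healthy vs. bronchiolitis vs. bacterial pneumonia

# weights=None means random initialization, i.e. training from scratch.
model = tf.keras.applications.InceptionResNetV2(
    weights=None, include_top=True, classes=NUM_CLASSES,
    input_shape=(299, 299, 3))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # placeholder value
              loss="categorical_crossentropy", metrics=["accuracy"])

# Affine data augmentation (rotation, shift, shear, zoom); ranges are placeholders.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=10,
    width_shift_range=0.1, height_shift_range=0.1,
    shear_range=0.1, zoom_range=0.1)
train_flow = datagen.flow_from_directory(
    "lus_frames/train", target_size=(299, 299),  # hypothetical directory layout
    batch_size=16, class_mode="categorical")

model.fit(train_flow, epochs=30)  # epoch count is a placeholder

def grad_cam(model, image, last_conv_layer_name):
    """Gradient-weighted class activation map for one preprocessed image."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_score = preds[:, tf.argmax(preds[0])]   # score of the predicted class
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized heatmap

# Usage: look up the name of the last convolutional layer via model.summary(), e.g.
# cam = grad_cam(model, preprocessed_frame, last_conv_layer_name="<last conv layer>")
```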
In co-located meetings, participants create and share content to establish a common understanding. In this paper, we present a collaborative environment that enables group members to create and share content simultaneously by providing them with different kinds of individual input devices and a shared workspace. We also report on an exploratory study investigating the influence of the input device used on the shared knowledge produced by the group. The results suggest that, driven by their affordances, the various input devices complement each other; we therefore recommend that groups be equipped with a multitude of them to support diverse meeting task demands. Additionally, we observed that groupware usage differs across the phases of the problem-solving activity. This has implications for the design of collaborative environments that assist each phase of the task, thereby extending their usefulness for the group.
We report on a study of gaze, conducted with children with pervasive developmental disorders (PDD), using a novel head-mounted eye-tracking device called the WearCam. Due to the portable nature of the WearCam, we are able to monitor naturalistic interactions between the children and adults. The study involved a group of 3 to 11-year-old children with PDD (n = 13) compared to a group of typically developing (TD) children between 2 and 6 years old (n = 13). We found significant differences between the two groups in the proportion and frequency of episodes of looking directly at faces over the whole set of experiments. We also conducted a differentiated analysis, in two social conditions, of the gaze patterns directed at an adult's face when the adult addressed the child either verbally or through a facial expression of emotion. We observed that children with PDD show a marked tendency to look more at the adult's face when she makes facial expressions than when she speaks.
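For illustration only (the abstract does not describe the computation), the two summary measures reported here, the proportion of time spent looking directly at faces and the frequency of face-looking episodes, could be derived from per-frame annotations as in the following sketch; the frame rate and the data format are assumptions, not the authors' procedure.

```python
# Hypothetical illustration, not the authors' analysis code. Input is assumed
# to be a per-frame boolean annotation of "gaze on the adult's face".

def face_looking_stats(on_face_per_frame, fps=25.0):
    """Return (proportion_of_time, episodes_per_minute)."""
    n = len(on_face_per_frame)
    proportion = sum(on_face_per_frame) / n
    # An episode starts whenever the annotation switches from off-face to on-face.
    episodes = sum(1 for prev, cur in zip([False] + on_face_per_frame[:-1],
                                          on_face_per_frame) if cur and not prev)
    minutes = n / fps / 60.0
    return proportion, episodes / minutes

# Example: 10 frames at 25 fps with two face-looking episodes covering half the time.
frames = [False, True, True, False, False, True, True, True, False, False]
print(face_looking_stats(frames, fps=25.0))  # -> (0.5, 300.0)
```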