Impaired audiovisual temporal integration, manifested as an abnormally widened temporal-binding window (TBW) for integrating sensory information, is found in both autism spectrum disorder (ASD) and schizophrenia (SCZ) and contributes to aberrant perceptual experiences and impaired social communication. We conducted two experiments using age-comparable samples of participants with early-onset SCZ and participants with ASD. Three paradigms were used: a unisensory temporal-order-judgment (TOJ) task, an audiovisual simultaneity-judgment (SJ) task, and an eye-tracking task. Results showed generalized deficits in temporal processing in SCZ, ranging from unisensory to multisensory modalities and from nonspeech to speech stimuli. In contrast, the widened TBW in ASD mainly affected the processing of speech stimuli. Applying the eye-tracking task with ecologically valid linguistic stimuli, we found that both participants with SCZ and participants with ASD exhibited reduced sensitivity in detecting audiovisual speech asynchrony. This impaired audiovisual speech integration correlated with negative symptoms. Although both ASD and SCZ involve impaired multisensory temporal integration, the impairment in ASD is largely confined to speech-related processing, whereas SCZ is associated with generalized deficits.
(1) Background: Cough is a major presentation in childhood asthma. Here, we aim to develop a machine-learning-based cough sound classifier for asthmatic and healthy children. (2) Methods: Children less than 16 years old were randomly recruited at a children's hospital from February 2017 to April 2018 and were divided into two cohorts: healthy children and children with acute asthma presenting with cough. Children with other concurrent respiratory conditions were excluded from the asthmatic cohort. Demographic data, duration of cough, and respiratory history were obtained. Children were instructed to produce voluntary cough sounds. These clinically labeled cough sounds were randomly divided into training and testing sets. Audio features such as Mel-Frequency Cepstral Coefficients (MFCCs) and Constant-Q Cepstral Coefficients (CQCCs) were extracted. Using the training set, a classification model was developed with a Gaussian Mixture Model–Universal Background Model (GMM-UBM). Its predictive performance was evaluated on the test set against the physicians' labels. (3) Results: Asthmatic cough sounds from 89 children (totaling 1192 cough sounds) and healthy coughs from 89 children (totaling 1140 cough sounds) were analyzed. The sensitivity and specificity of the audio-based classification model were 82.81% and 84.76%, respectively, when differentiating coughs from asthmatic children versus coughs from healthy children. (4) Conclusion: Audio-based classification using machine learning is a potentially useful technique for assisting the differentiation of asthmatic cough sounds from healthy voluntary cough sounds in children.
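The classification step described above can be illustrated with a simplified sketch. This is not the study's implementation: it replaces the full GMM-UBM (mixture models adapted from a universal background model) with single diagonal-Gaussian class models scored by a log-likelihood ratio, and it uses synthetic feature vectors in place of real MFCC/CQCC frames. All function names and data here are illustrative assumptions.

```python
import numpy as np

def fit_diag_gaussian(X):
    # Maximum-likelihood mean and diagonal variance for one class's
    # feature frames; variance floor avoids degenerate zero variance.
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6
    return mu, var

def log_likelihood(X, mu, var):
    # Total log density of all frames X under a diagonal Gaussian.
    const = -0.5 * np.sum(np.log(2.0 * np.pi * var))
    return float(np.sum(const - 0.5 * ((X - mu) ** 2 / var).sum(axis=1)))

def classify(X, model_pos, model_neg):
    # Likelihood-ratio decision over a recording's frames:
    # returns 1 if the positive-class model fits better, else 0.
    llr = log_likelihood(X, *model_pos) - log_likelihood(X, *model_neg)
    return int(llr > 0)

# Synthetic "feature frames" standing in for cepstral coefficients.
rng = np.random.default_rng(0)
asthma_train = rng.normal(3.0, 1.0, (200, 4))   # hypothetical class A
healthy_train = rng.normal(0.0, 1.0, (200, 4))  # hypothetical class B

asthma_model = fit_diag_gaussian(asthma_train)
healthy_model = fit_diag_gaussian(healthy_train)

test_cough = rng.normal(3.0, 1.0, (20, 4))      # frames resembling class A
label = classify(test_cough, asthma_model, healthy_model)
```

In the actual GMM-UBM approach, a background mixture model is trained on pooled data and class models are derived from it by MAP adaptation, but the decision rule is the same log-likelihood-ratio comparison sketched here.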