Background: Autism spectrum disorder (ASD) is a neurodevelopmental disorder that results in altered behavior, social development, and communication patterns. In recent years, autism prevalence has tripled, with 1 in 44 children now affected. Given that traditional diagnosis is a lengthy, labor-intensive process requiring trained physicians, significant attention has been given to developing systems that detect autism automatically. We work toward this goal by analyzing audio data, as prosody abnormalities are a signal of autism, with affected children displaying speech idiosyncrasies such as echolalia, monotonous intonation, atypical pitch, and irregular linguistic stress patterns.

Objective: We aimed to test the ability of machine learning approaches to aid in the detection of autism from self-recorded speech audio captured from children with ASD and neurotypical (NT) children in their home environments.

Methods: We considered three methods to detect autism in child speech: (1) random forests trained on extracted audio features (including Mel-frequency cepstral coefficients); (2) convolutional neural networks trained on spectrograms; and (3) fine-tuned wav2vec 2.0, a state-of-the-art transformer-based speech recognition model. We trained our classifiers on our novel data set of cellphone-recorded child speech audio curated from the Guess What? mobile game, an app designed to crowdsource videos of children with ASD and NT children in a natural home environment.

Results: Evaluated with 5-fold cross-validation, the random forest classifier achieved 70% accuracy, the fine-tuned wav2vec 2.0 model achieved 77% accuracy, and the convolutional neural network achieved 79% accuracy when classifying children's audio as either ASD or NT.

Conclusions: Our models were able to predict autism status when trained on a varied selection of home audio clips with inconsistent recording qualities, which may be more representative of real-world conditions. The results demonstrate that machine learning methods offer promise in detecting autism automatically from speech without specialized equipment.
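As a concrete illustration of the first method, here is a minimal sketch that extracts MFCC features and evaluates a random forest with 5-fold cross-validation, mirroring the evaluation above. The synthetic clips, sample rate, and hyperparameters are illustrative stand-ins, not the authors' actual Guess What? data or training configuration.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def mfcc_vector(clip, sr=22050, n_mfcc=13):
    # Summarize frame-level MFCCs into a fixed-length vector (mean and std per coefficient)
    m = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Synthetic 3-second "clips" and random labels stand in for the real recordings
rng = np.random.default_rng(0)
clips = [rng.standard_normal(3 * 22050) for _ in range(40)]
labels = rng.integers(0, 2, size=len(clips))  # placeholder ASD (1) / NT (0) labels

X = np.stack([mfcc_vector(c) for c in clips])
clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # 5-fold cross-validated accuracy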
Huntington’s disease (HD) is caused by an expanded CAG repeat in the huntingtin gene, yielding a Huntingtin protein with an expanded polyglutamine tract. While experiments with patient-derived induced pluripotent stem cells (iPSCs) can help in understanding the disease, defining pathological biomarkers remains challenging. Here, we used cryogenic electron tomography to visualize neurites in HD patient iPSC-derived neurons with varying CAG repeats, as well as primary cortical neurons from BACHD, deltaN17-BACHD, and wild-type mice. In HD models, we discovered sheet aggregates in double membrane-bound organelles, as well as mitochondria with distorted cristae and enlarged granules, likely mitochondrial RNA granules. We used artificial intelligence to quantify mitochondrial granules, and proteomics experiments revealed differential protein content in isolated HD mitochondria. Knockdown of Protein Inhibitor of Activated STAT1 ameliorated aberrant phenotypes in iPSC-derived and BACHD neurons. We show that integrated ultrastructural and proteomic approaches may uncover early HD phenotypes to accelerate diagnostics and the development of targeted therapeutics for HD.
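The abstract does not specify how the artificial-intelligence quantification of mitochondrial granules was implemented. Purely as a hypothetical stand-in, the sketch below counts granule-like blobs in a 2D tomogram slice with Laplacian-of-Gaussian detection; the random array substitutes for real cryo-ET data, and the size and threshold parameters are assumptions.

import numpy as np
from skimage.feature import blob_log

# Random intensities stand in for a denoised 2D slice of a tomogram
rng = np.random.default_rng(0)
slice_2d = rng.random((256, 256))

# Laplacian-of-Gaussian blob detection as a simple proxy for granule counting;
# each returned row is (y, x, sigma), with approximate radius sigma * sqrt(2)
blobs = blob_log(slice_2d, min_sigma=2, max_sigma=8, threshold=0.3)
print(f"{len(blobs)} granule-like blobs detected")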
The successful development of iodinated ligands for various neurotransmitter receptors prompted us to explore the feasibility of iodinated ligands that target the amyloid plaques of Alzheimer's disease. Several potential iodinated tracers based on various chemical backbone structures have been prepared and evaluated for this purpose. High binding affinities for Abeta aggregates were consistently observed for these ligands. However, the majority of these iodinated ligands lacked the desirable in vivo properties. Only ligands with promising in vitro and in vivo characteristics, such as IMPY, are likely to succeed as imaging agents for mapping amyloid plaques in the living human brain.
Activity recognition computer vision algorithms can be used to detect the presence of autism-related behaviors, including what diagnostic instruments term "restricted and repetitive behaviors," or stimming. Examples of stimming include hand flapping, spinning, and head banging. One of the most significant bottlenecks for implementing such classifiers is the lack of sufficiently large training sets of human behavior specific to pediatric developmental delays. The data that do exist are usually recorded with a handheld camera that is itself shaky or even moving, posing a challenge for traditional feature representation approaches to activity detection, which capture the camera's motion as a feature. To address these issues, we first document the advantages and limitations of current feature representation techniques for activity recognition when applied to head banging detection. We then propose a feature representation consisting exclusively of head pose keypoints. We create a computer vision classifier for detecting head banging in home videos using a time-distributed convolutional neural network (CNN) in which a single CNN extracts features from each frame in the input sequence, and these extracted features are fed as input to a long short-term memory (LSTM) network. On the binary task of predicting head banging versus no head banging within videos from the Self Stimulatory Behaviour Dataset (SSBD), we reach a mean F1-score of 90.77% using 3-fold cross-validation (with individual fold F1-scores of 83.3%, 89.0%, and 100.0%) while ensuring that no child who appeared in the training set appeared in the test set for any fold. This work documents a successful process for training a computer vision classifier that can detect a particular human motion pattern with few training examples, even when the camera recording the source clip is unstable.
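Below is a minimal sketch of the time-distributed CNN + LSTM architecture described above, written in Keras. The clip length, frame size, and layer widths are assumptions for illustration (the per-frame inputs could be rendered head pose keypoint maps), not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 30, 64, 64, 3  # assumed frames per clip and per-frame input shape

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),
    # The same CNN is applied independently to every frame in the sequence
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # The LSTM aggregates the per-frame feature vectors across time
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # head banging vs. no head banging
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()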