Deriving disease subtypes from electronic health records (EHRs) can guide next-generation personalized medicine. However, challenges in summarizing and representing patient data prevent the widespread practice of scalable EHR-based stratification analysis. Here we present an unsupervised framework based on deep learning to process heterogeneous EHRs and derive patient representations that can efficiently and effectively enable patient stratification at scale. We considered EHRs of 1,608,741 patients from a diverse hospital cohort comprising a total of 57,464 clinical concepts. We introduce a representation learning model based on word embeddings, convolutional neural networks, and autoencoders (i.e., ConvAE) to transform patient trajectories into low-dimensional latent vectors. We evaluated whether these representations broadly enable patient stratification by applying hierarchical clustering to different multi-disease and disease-specific patient cohorts. ConvAE significantly outperformed several baselines in a clustering task to identify patients with different complex conditions, with average scores of 2.61 for entropy and 0.31 for purity. When applied to stratify patients within a given condition, ConvAE yielded various clinically relevant subtypes for different disorders, including type 2 diabetes, Parkinson's disease, and Alzheimer's disease, largely related to comorbidities, disease progression, and symptom severity. With these results, we demonstrate that ConvAE can generate patient representations that lead to clinically meaningful insights. This scalable framework can help better understand varying etiologies in heterogeneous sub-populations and unlock patterns for EHR-based research in the realm of personalized medicine.
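The pipeline outlined above (concept embeddings, a convolutional encoder producing low-dimensional latent vectors, and hierarchical clustering of those vectors) can be sketched in miniature. This is an illustrative toy, not the authors' implementation: all sizes, names, and weights below are hypothetical, the parameters are random and untrained, and a single hand-rolled 1-D convolution stands in for the full ConvAE architecture.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# --- Hypothetical toy dimensions (illustrative, not from the paper) ---
n_concepts = 100   # vocabulary of clinical concepts
embed_dim = 16     # concept-embedding size
seq_len = 32       # padded length of a patient trajectory
latent_dim = 8     # size of the encoder bottleneck
n_patients = 60

# Random (untrained) parameters: embedding table, a small 1-D conv
# filter bank, and a dense projection to the latent space.
embeddings = rng.normal(size=(n_concepts, embed_dim))
conv_filters = rng.normal(size=(4, 3, embed_dim))  # 4 filters, width 3
W_enc = rng.normal(size=(4, latent_dim))

def encode(trajectory):
    """Map a sequence of concept IDs to a low-dimensional latent vector."""
    x = embeddings[trajectory]                       # (seq_len, embed_dim)
    feats = []
    for f in conv_filters:
        # Valid 1-D convolution over the sequence axis.
        resp = [np.sum(x[i:i + 3] * f) for i in range(seq_len - 2)]
        feats.append(max(resp))                      # max-pool over time
    h = np.maximum(np.array(feats), 0.0)             # ReLU
    return h @ W_enc                                 # latent vector

# Encode a synthetic cohort, then cluster the latent vectors hierarchically.
trajectories = rng.integers(0, n_concepts, size=(n_patients, seq_len))
latents = np.array([encode(t) for t in trajectories])
Z = linkage(latents, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(latents.shape, len(set(labels)))
```

In a real setting the encoder weights would be learned with an autoencoder-style reconstruction objective before clustering; here the forward pass only demonstrates the shape of the transformation from trajectories to latent vectors.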
The recognition of emotional body movement (BM) is impaired in individuals with Autism Spectrum Disorder (ASD), yet it is unclear whether the difficulty stems from the encoding of body motion, of emotion, or both. Moreover, BM recognition has traditionally been studied using point-light display (PLD) stimuli and remains underexplored in individuals with ASD and intellectual disability (ID). In the present study, we investigated the recognition of happy, fearful, and neutral BM in children with ASD with and without ID. In a non-verbal recognition task, participants were asked to recognize pure-body-motion and visible-body-form stimuli (presented as point-light displays and full-light displays (FLDs), respectively). We found that children with ASD were less accurate than typically developing (TD) children in recognizing both emotional and neutral BM, whether presented as FLDs or PLDs. These results suggest that the difficulty in understanding observed BM may stem from atypical processing of body-motion information rather than of emotion. Moreover, we found that accuracy improved with age and IQ only in children with ASD without ID, suggesting that higher levels of cognitive resources can mediate the acquisition of compensatory mechanisms that develop with age.