The core symptoms of autism spectrum disorder (ASD) mainly relate to social communication and interaction. ASD assessment relies on expert observations in neutral settings, which introduces limitations and biases related to a lack of objectivity and does not capture performance in real-world settings. To overcome these limitations, advances in technologies (e.g., virtual reality) and sensors (e.g., eye-tracking tools) have been used to create realistic simulated environments and track eye movements, enriching assessments with more objective data than can be obtained via traditional measures. This study aimed to distinguish between autistic and typically developing children using visual attention behaviors, measured through an eye-tracking paradigm in a virtual environment, as a marker of attunement to and extraction of socially relevant information. Fifty-five children participated. Autistic children presented a higher number of frames, both overall and per scenario, and showed a stronger visual preference for adults over children, as well as a specific preference for adults' rather than children's faces, on which they looked more at bodies. A set of multivariate supervised machine learning models was developed using recursive feature selection to recognize ASD based on the extracted eye-gaze features. The models achieved up to 86% accuracy (sensitivity = 91%) in recognizing autistic children. These results should be taken as preliminary due to the relatively small sample size and the lack of an external replication dataset. However, to our knowledge, this constitutes a first proof of concept for the combined use of virtual reality, eye-tracking tools, and machine learning for ASD recognition.
Lay Summary: Core symptoms in children with ASD involve social communication and interaction. ASD assessment includes expert observations in neutral settings, which show limitations and biases related to a lack of objectivity and do not capture performance in real-world settings. To overcome these limitations, this work aimed to distinguish between autistic and typically developing children in their visual attention behaviors through an eye-tracking paradigm in a virtual environment, as a measure of attunement to, and extraction of, socially relevant information.
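As an illustration of the kind of pipeline the abstract describes, the sketch below pairs recursive feature elimination with a supervised classifier over eye-gaze features. The feature set, the logistic-regression model, and the cross-validation scheme are illustrative assumptions; the abstract does not specify the authors' actual features, model family, or validation procedure.

```python
# Hypothetical sketch: recursive feature selection over eye-gaze features
# followed by a supervised classifier, evaluated with cross-validation.
# Features, model choice, and CV scheme are assumptions, not the authors' method.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(55, 20))        # 55 children x 20 hypothetical gaze features
y = rng.integers(0, 2, size=55)      # 1 = autistic, 0 = typically developing (dummy labels)

clf = make_pipeline(
    StandardScaler(),
    # Recursive feature elimination with internal cross-validation to pick
    # how many gaze features to keep; the wrapped estimator also classifies.
    RFECV(LogisticRegression(max_iter=1000),
          cv=StratifiedKFold(5), scoring="accuracy"),
)

scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5), scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real gaze features and labels, the reported accuracy and sensitivity would come from the held-out folds rather than from the dummy data used here.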
Objective: To create an electronic frailty index (eFRAGICAP) using electronic health records (EHR) in Catalunya (Spain) and to assess its predictive validity over a two-year follow-up for the outcomes of homecare need, institutionalization, and mortality in the elderly. Additionally, to assess its concurrent validity against other standardized measures: the Clinical Frailty Scale (CFS) and the Risk Instrument for Screening in the Community (RISC).
Methods: The eFRAGICAP was based on the electronic frailty index (eFI) developed in the United Kingdom and includes 36 deficits identified through clinical diagnoses, prescriptions, physical examinations, and questionnaires registered in the EHR of primary health care centres (PHC). All subjects aged >65 years assigned to a PHC in Barcelona on 1st January 2016 were included. Subjects were classified according to their eFRAGICAP index as fit or as having mild, moderate, or severe frailty. Predictive validity was assessed against the following outcomes at 24 months: institutionalization, homecare need, and mortality. Concurrent validity of the eFRAGICAP was assessed against the CFS and the RISC in a subsample of subjects (n = 333) drawn from the global cohort. Discrimination and calibration measures were calculated for the outcomes (institutionalization, homecare need, and mortality) and for the frailty scales.
Results: 253,684 subjects had their eFRAGICAP index calculated. Mean age was 76.3 years (59.5% women). Of these, 41.1% were classified as fit, 32.2% as mild, 18.7% as moderate, and 7.9% as severe frailty. The mean age of the subjects in the validation subsample (n = 333) was 79.9 years (57.7% women); of these, 12.6% were classified as fit, 31.5% as mild, 39.6% as moderate, and 16.2% as severe frailty. In the outcome analyses, the eFRAGICAP showed good discrimination of subjects who were institutionalized, required homecare assistance, or died at 24 months (c-statistics of 0.841, 0.853, and 0.803, respectively). The eFRAGICAP was also good at detecting frail subjects when compared with the CFS (AUC 0.821) and the RISC (AUC 0.848).
Conclusion: The eFRAGICAP has good discriminative capacity to identify frail subjects, compared with other frailty scales, and to predict adverse outcomes.
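The sketch below illustrates how a deficit-accumulation index of this kind is typically computed and categorized: the index is the proportion of the 36 deficits present in a subject's record. The cut points used here (fit ≤0.12, mild ≤0.24, moderate ≤0.36, severe >0.36) are the thresholds commonly used with the UK eFI; the abstract does not state the eFRAGICAP's exact thresholds, so they should be read as assumptions.

```python
# Illustrative sketch of a 36-deficit accumulation index and its categorization.
# Deficit names and category cut points are assumptions (eFI-style thresholds),
# not the published eFRAGICAP specification.
from typing import Mapping

N_DEFICITS = 36  # deficits from diagnoses, prescriptions, exams, questionnaires

def frailty_index(deficits: Mapping[str, bool]) -> float:
    """Proportion of the 36 deficits present in the subject's EHR (0.0-1.0)."""
    return sum(bool(v) for v in deficits.values()) / N_DEFICITS

def frailty_category(index: float) -> str:
    """Map an index value to a category using assumed eFI-style cut points."""
    if index <= 0.12:
        return "fit"
    if index <= 0.24:
        return "mild"
    if index <= 0.36:
        return "moderate"
    return "severe"

# Example: a subject with 9 of the 36 deficits recorded
example = {f"deficit_{i}": i < 9 for i in range(N_DEFICITS)}
fi = frailty_index(example)
print(fi, frailty_category(fi))  # 0.25 moderate
```

Discrimination statistics such as the c-statistics and AUCs reported above would then be computed by relating each subject's index (or category) to the observed 24-month outcomes.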