The goal of this research is to develop learning methods for the automatic analysis and interpretation of human motion and gestures from various perspectives and data sources, such as images, video, depth, mocap data, audio, and inertial sensors. Deep neural models are used for supervised classification and semi-supervised feature learning, as well as for modeling temporal dependencies, and their effectiveness is demonstrated on a set of fundamental tasks, including detection, classification, parameter estimation, and user verification. The paper presents a method for detecting and classifying human actions and gestures based on multi-dimensional, multi-modal deep learning from visual signals (for example, video streams, depth, and motion-based data). The training strategy first initializes each modality individually and carefully, then applies gradual fusion (called ModDrop) to learn cross-modal correlations while preserving the uniqueness of each modality-specific representation. In addition, the proposed ModDrop training procedure makes the classifier robust to missing or corrupted inputs on one or more channels, enabling valid predictions from any subset of available modalities. Data collected by inertial sensors (such as accelerometers and gyroscopes) embedded in mobile devices are also used.
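The described training strategy (per-modality initialization followed by fusion with random modality dropping) can be illustrated with a minimal PyTorch-style sketch. The encoder names, feature sizes, and drop probability below are illustrative assumptions, not the authors' exact ModDrop architecture or hyperparameters.

```python
# Minimal sketch of ModDrop-style multimodal fusion training.
# Assumption: each modality encoder has been pretrained separately
# before the fusion stage, as suggested by the abstract.
import torch
import torch.nn as nn

class ModDropFusion(nn.Module):
    def __init__(self, encoders: dict, feat_dim: int, n_classes: int, p_drop: float = 0.2):
        super().__init__()
        # One encoder per modality (e.g. video, depth, mocap, inertial).
        self.encoders = nn.ModuleDict(encoders)
        self.p_drop = p_drop  # illustrative value, not from the paper
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * len(encoders), 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, inputs: dict):
        feats = []
        for name, enc in self.encoders.items():
            f = enc(inputs[name])
            if self.training:
                # Drop the whole modality with probability p_drop so the
                # classifier learns to predict from any subset of channels.
                keep = (torch.rand(f.size(0), 1, device=f.device) > self.p_drop).float()
                f = f * keep
            feats.append(f)
        return self.classifier(torch.cat(feats, dim=1))

# Example usage with toy linear encoders for two modalities (illustrative only).
enc = {"video": nn.Linear(512, 64), "inertial": nn.Linear(32, 64)}
model = ModDropFusion(enc, feat_dim=64, n_classes=20)
out = model({"video": torch.randn(8, 512), "inertial": torch.randn(8, 32)})
```

At inference time (`model.eval()`) no modalities are dropped; a missing channel can simply be zeroed at the input, which mirrors the condition the classifier was trained to tolerate.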
BACKGROUND: Autism Spectrum Disorder (ASD) is a neurodevelopmental disability of increasing prevalence, which has led to considerable research focused on the therapy of individuals with autism, especially children, since early diagnosis and appropriate treatment can lead to an improvement in the condition. With the widespread availability of virtual/augmented/mixed reality (VR/AR/MR) technologies to the general public and the increasing popularity of mobile devices, there is growing interest in using applications based on these technologies to support the therapy of children with autism.

OBJECTIVE: The objective of this study was to investigate the potential of virtual/augmented/mixed reality technologies in the context of therapy for children with autism spectrum disorder and to conduct a systematic review of the literature on the development of mobile applications based on these technologies.

METHODS: For the systematic literature review, six research questions were defined in the first phase, after which five international databases (Web of Science, Scopus, Science Direct, IEEE Xplore Digital Library, and ACM Digital Library) were searched using specific search strings. Results were centralized, filtered, and processed by applying eligibility criteria and following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. The publications retained for the present study were carefully analyzed, and the research questions were answered.

RESULTS: In the first step, 179 publications found during the search were imported. After reviewing the title and abstract, 89 publications were excluded because they did not address the proposed topic, focusing either on another condition or on other technologies. Next, the full text of the remaining 90 publications was reviewed and the quality assessment criteria were applied. The eligibility criteria were applied to 78 publications, and after quality assessment a total of 28 publications were included in this study to answer the research questions.

CONCLUSIONS: Although the concept of augmented/virtual/mixed reality is not new, it has only recently begun to be used in the development of applications for the therapy of children with ASD. The findings reported in publications indexed in five scientific databases highlight that these technologies are appropriate for this type of therapy, which motivates the in-depth study of this topic and the development of future applications based on these technologies. Several studies show a distinct trend toward the use of augmented reality technology as an educational tool for people with ASD. This trend entails multidisciplinary cooperation and an integrated approach to research, with an emphasis on comprehensive empirical evaluations and technology ethics.