The inability to maintain balance under varying postural control conditions can lead to falls, a significant cause of mortality and serious injury among older adults. However, the underlying dynamical and stochastic processes in human postural control have not been fully explored. To further our understanding of these processes, we examine a novel conceptual framework for studying human postural control using the center of pressure (COP) velocity autocorrelation function (COP-VAF) and compare its results to Stabilogram Diffusion Analysis (SDA). Eleven healthy young participants were studied during quiet unipedal or bipedal standing with eyes either open or closed. COP trajectories were analyzed using both the traditional posturographic measure SDA and the proposed COP-VAF. We show that the COP-VAF yields repeatable, physiologically meaningful measures that distinguish postural control differences between unipedal and bipedal stance trials, with and without vision, in healthy individuals. More specifically, both unipedal stance and lack of visual feedback increased the initial value of the COP-VAF, the magnitude of its first minimum, and the diffusion coefficient, particularly in contrast to bipedal stance trials with eyes open. Fitting a stochastic postural control model, based on an Ornstein-Uhlenbeck process that accounts for natural weight shifts, to the experimental data suggests an increased spring constant and a decreased damping coefficient. This work suggests that the COP-VAF can further extend our understanding of the mechanisms underlying postural control in quiet stance under varying stance conditions, and it provides a tool for quantifying future neurorehabilitative interventions.
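For readers who want to experiment with the analysis, the sketch below shows one way to compute a COP velocity autocorrelation function from a COP trace and to simulate a noise-driven spring-damper model of Ornstein-Uhlenbeck type. This is a minimal illustration, not the authors' implementation: the sampling rate, spring constant, damping coefficient, and noise level are assumed values, and the natural weight-shift term of the paper's model is omitted.

```python
import numpy as np

def cop_vaf(cop, fs, max_lag_s=2.0):
    """Velocity autocorrelation function of a 1-D COP trajectory.

    cop : COP positions (m) sampled at fs (Hz).
    Returns lag times (s) and the autocorrelation of COP velocity.
    """
    v = np.diff(cop) * fs                        # finite-difference velocity
    v = v - v.mean()
    n = len(v)
    max_lag = int(max_lag_s * fs)
    acf = np.array([np.mean(v[:n - k] * v[k:]) for k in range(max_lag)])
    return np.arange(max_lag) / fs, acf

def simulate_cop(T=60.0, fs=100.0, k=100.0, c=5.0, sigma=0.5, m=1.0, seed=0):
    """Euler-Maruyama simulation of a noise-driven spring-damper:
    m dv = (-k x - c v) dt + sigma dW. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / fs
    n = int(T * fs)
    x = np.zeros(n)
    v = np.zeros(n)
    for i in range(1, n):
        a = (-k * x[i - 1] - c * v[i - 1]) / m
        v[i] = v[i - 1] + a * dt + (sigma / m) * np.sqrt(dt) * rng.standard_normal()
        x[i] = x[i - 1] + v[i] * dt
    return x

lags, acf = cop_vaf(simulate_cop(), fs=100.0)
print(acf[0], lags[np.argmin(acf)])  # initial VAF value; lag of the first minimum
```

With spring and damping terms present, the simulated VAF decays and crosses zero to a negative first minimum, which is the qualitative shape the abstract's measures (initial value, first minimum) summarize.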
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder that pervasively interferes with the lives of individuals beginning in childhood. Objective. To address the subjectivity of current diagnostic approaches, many studies have sought to identify differences between ADHD and neurotypical (NT) individuals using EEG and continuous performance tests (CPT). Approach. In this study, we propose EEG-based long short-term memory (LSTM) networks that use deep learning techniques to learn cognitive state transitions and thereby discriminate between ADHD and NT children via EEG signal processing. A total of thirty NT children and thirty ADHD children participated in CPT tests while being monitored with EEG. Several deep and machine learning architectures were applied to three EEG data segments: resting state, cognitive execution, and a period fusing the two. Main results. By learning the cognitive state transitions in the EEG data, the EEG-based LSTM networks produced the best performance, with an average accuracy of 90.50 ± 0.81%, compared with deep neural networks, convolutional neural networks, and support vector machines. Novel observations of individual neural markers showed that beta power at the O1 and O2 sites contributed most to the classification; the ADHD group exhibited decreased beta power, with larger decreases during cognitive execution. Significance. These findings show that the proposed EEG-based LSTM networks can extract the varied temporal characteristics of high-resolution electrophysiological signals to differentiate between ADHD and NT children, offering new insight to facilitate the diagnosis of ADHD. The institutional review board registration numbers are 16MMHIS021 and EC1070401-F.
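As a rough illustration of the classification approach, the sketch below defines a small LSTM that maps a sequence of per-window EEG features to an ADHD-versus-NT decision. The channel count, window count, hidden size, and other hyperparameters are assumptions for illustration only and do not reproduce the study's architecture or preprocessing.

```python
import torch
import torch.nn as nn

class EEGLSTMClassifier(nn.Module):
    """Binary ADHD-vs-NT classifier over EEG feature sequences.

    Expects input of shape (batch, time_steps, n_features), e.g. band-power
    features per channel per sliding window; all sizes are illustrative.
    """
    def __init__(self, n_features=19, hidden=64, n_layers=2, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers,
                            batch_first=True, dropout=0.2)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # track state transitions across windows
        return self.head(out[:, -1])   # classify from the final hidden state

model = EEGLSTMClassifier()
dummy = torch.randn(8, 30, 19)         # 8 trials, 30 windows, 19 feature channels
print(model(dummy).shape)              # torch.Size([8, 2])
```

Feeding the network a sequence spanning both resting state and cognitive execution is one way to let the recurrent state capture the transition between the two, which is the property the abstract credits for the LSTM's advantage.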
Purpose. Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment. Methods. This cross-sectional study included 31 glaucoma patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built, varying only in the complexity of the visual scene, in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible. Results. For room A, glaucoma patients spent an average of 35.0 seconds to perform the wayfinding task, whereas healthy subjects spent an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in average time to complete the task (26.2 versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB worse binocular mean sensitivity was associated with a 3.4% increase in time to complete the task (P = 0.001). Conclusions. Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting that visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints in clinical trials of emerging therapies in ophthalmology.
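A per-dB percentage effect like the 3.4% figure is most naturally read as a multiplicative (log-linear) association, so it compounds over larger sensitivity losses rather than adding. The short sketch below, using assumed values rather than the study's data, shows how such a coefficient scales.

```python
import numpy as np

# Illustrative reading of the reported effect: in a log-linear model
# log(time) = b0 + b1 * (dB of worse binocular mean sensitivity),
# a 3.4% increase per 1 dB corresponds to b1 = log(1.034).
b1 = np.log(1.034)
for loss_db in (1, 5, 10):
    pct = (np.exp(b1 * loss_db) - 1) * 100
    print(f"{loss_db} dB worse -> {pct:.1f}% longer task time")
```

Under this reading, a 10-dB worse mean sensitivity corresponds to roughly a 40% longer completion time, not 34%, because the per-dB effect compounds.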