There are several large datasets captured with motion tracking systems which could be useful for training wearable human activity recognition (HAR) systems, if only their spatial data could be mapped to the equivalent inertial measurement unit (IMU) data that would be sensed on the body. In this paper we describe a mapping from 3D Vicon motion tracking data to data collected from a BlueSense on-body IMU. We characterise the error incurred in order to discern the extent to which it is possible to generate useful training data for a wearable activity recognition system from data collected with a motion capture system. We analyse this by mapping Vicon motion tracking data to rotational velocity and linear acceleration at the head, and compare the result to actual gyroscope and accelerometer data collected by an IMU mounted on the head. In a 15-minute dataset comprising three static activities (sitting, standing and lying down) we find that 95% of the reconstructed gyroscope data lies within an error of [-7.25; +7.46] °·s⁻¹, while 95% of the reconstructed accelerometer data lies within [-96.1; +72.9] cm·s⁻². However, when we introduce more movement by including data collected while walking, these ranges increase to [-19.0; +18.2] °·s⁻¹ for the gyroscope and [-208; +186] cm·s⁻² for the accelerometer. We conclude that generating accurate IMU data from motion capture datasets is possible and could be useful in providing larger volumes of training data for wearable HAR systems.
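The mapping described in this abstract can be sketched in a few lines: the virtual gyroscope is the body-frame derivative of the tracked orientation, and the virtual accelerometer is the second derivative of the tracked position with gravity added, rotated into the sensor frame. The snippet below is a minimal illustration of that idea, not the paper's exact pipeline; `positions`, `quats`, the sampling rate and the z-up world frame are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's pipeline): derive virtual gyroscope
# and accelerometer signals from motion-capture pose.  `positions` is an (N, 3)
# array of sensor positions in metres, `quats` an (N, 4) array of body-to-world
# orientations (x, y, z, w), both sampled at `fs` Hz -- hypothetical inputs.
import numpy as np
from scipy.spatial.transform import Rotation as R

def virtual_imu(positions, quats, fs=100.0, g=9.81):
    dt = 1.0 / fs
    rots = R.from_quat(quats)

    # Gyroscope: relative rotation between consecutive samples, R_t^-1 * R_{t+1};
    # its rotation vector divided by dt approximates the angular velocity
    # expressed in the sensor frame (rad/s -> deg/s).  One sample shorter than N.
    rel = rots[:-1].inv() * rots[1:]
    gyro_dps = np.rad2deg(rel.as_rotvec() / dt)

    # Accelerometer: second derivative of world position gives linear
    # acceleration; an accelerometer senses specific force (acceleration minus
    # gravity), expressed in the sensor frame.
    acc_world = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)
    gravity = np.array([0.0, 0.0, -g])            # assumes world z points up
    specific_force_world = acc_world - gravity     # stationary sensor reads +1 g on z
    acc_body = rots.inv().apply(specific_force_world)

    return gyro_dps, acc_body  # deg/s and m/s^2 (scale by 100 for cm/s^2)
```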
Neural Architecture Search (NAS) has the potential to uncover more performant networks for wearable activity recognition, but a naive evaluation of the search space is computationally expensive. We introduce neural regression methods for predicting the converged performance of a Deep Neural Network (DNN) from its validation performance in early epochs and from topological and computational statistics. Our approach shows a significant improvement in predicting converged testing performance. We apply this to the optimisation of the convolutional feature extractor of an LSTM recurrent network using NAS with deep Q-learning, optimising the kernel size, the number of kernels, the number of layers and the connections between layers, allowing for arbitrary skip connections and dimensionality reduction with pooling layers. We find architectures which achieve up to 4% better F1 score on the recognition of gestures in the Opportunity dataset than our implementation of the state-of-the-art model DeepConvLSTM, while reducing the search time by more than 90% compared to a random search. This opens the way to rapidly searching for well-performing, dataset-specific architectures.
CCS Concepts: • Computing methodologies → Neural networks.
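As a rough illustration of the performance-prediction idea (a converged score regressed from early-epoch validation results plus architecture statistics), the sketch below trains a small regressor on synthetic placeholder data; the feature set, model choice and data are assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' implementation: predict a candidate network's
# converged F1 score from its early-epoch validation curve plus simple
# architecture statistics, so unpromising architectures can be discarded early.
# All data below is a synthetic placeholder; in practice it would come from
# partially trained NAS candidates.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_candidates, early_epochs = 200, 5

early_curves = rng.uniform(0.3, 0.8, size=(n_candidates, early_epochs))  # val F1, epochs 1..5
arch_stats = rng.uniform(0.0, 1.0, size=(n_candidates, 3))               # e.g. depth, params, FLOPs (normalised)
final_f1 = early_curves[:, -1] + 0.1 * arch_stats[:, 0] + rng.normal(0, 0.02, n_candidates)

X = np.hstack([early_curves, arch_stats])
X_tr, X_te, y_tr, y_te = train_test_split(X, final_f1, test_size=0.25, random_state=0)

predictor = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=0)
predictor.fit(X_tr, y_tr)
print("held-out R^2:", round(predictor.score(X_te, y_te), 3))
```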
Freezing of Gait (FoG) is a common disabling motor symptom in Parkinson's Disease (PD). Auditory cueing provided when FoG is detected can help mitigate the condition, and earables are potentially well suited for this as they are capable of both motion sensing and audio feedback. However, there are no studies so far on FoG detection at the ear. Immersive Virtual Reality (VR) combined with video-based full-body motion capture has been increasingly used to run FoG studies in the medical community. While there are motion capture datasets collected in such environments, there are no datasets collected from an IMU placed at the ear. In this paper, we show how to transfer such motion capture datasets to the IMU domain and evaluate the capability of FoG detection at the ear position in an immersive VR environment. Using a dataset of 6 PD patients, we compare machine-learning-based FoG detection applied to the motion capture data and to the virtual IMU data. We achieve an average sensitivity of 80.3% and an average specificity of 87.6% for FoG detection using the virtual earable IMU, which indicates the potential of FoG detection at the ear. This study is a step toward user studies with earables in the VR setup, prior to conducting research in over-ground walking and everyday life.
CCS Concepts: • Applied computing → Consumer health.
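A minimal sketch of the downstream detection step is given below: windowed statistical features from a (here synthetic) 6-axis ear-worn IMU signal are fed to a classifier and evaluated with sensitivity and specificity, the metrics reported above. The signal, labels, window length and classifier are all placeholder assumptions rather than the study's pipeline.

```python
# Minimal sketch (assumptions, not the study's exact pipeline): window a virtual
# ear-worn IMU signal, extract simple statistical features, classify each window
# as FoG / no-FoG, and report sensitivity and specificity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
imu = rng.normal(size=(60_000, 6))        # placeholder 6-axis (gyro + accel) signal
imu[20_000:30_000] *= 0.3                 # placeholder: reduced movement during a FoG episode
labels = np.zeros(60_000, dtype=int)
labels[20_000:30_000] = 1                 # placeholder per-sample FoG labels

def windows(signal, y, length=200, step=100):
    feats, targets = [], []
    for start in range(0, len(signal) - length, step):
        w = signal[start:start + length]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        targets.append(int(y[start:start + length].mean() > 0.5))  # majority label
    return np.array(feats), np.array(targets)

X, y = windows(imu, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
```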