The use of low-cost immersive virtual reality systems is rapidly expanding. Several studies have begun to analyse the accuracy of virtual reality tracking systems, but they did not consider in depth the effects of external interferences in the working area. Accordingly, this study aimed to explore the static positional accuracy and the robustness to occlusions inside the capture volume of the SteamVR (1.0) tracking system. To do so, we ran 3 different tests in which we acquired the position of HTC Vive PRO Trackers (2018 version) at specific points of a grid drawn on the floor, in regular tracking conditions and with partial and total occlusions. The tracking system showed high inter- and intra-rater reliability and detected a surface tilted with respect to the floor plane. Every acquisition was characterised by an initial random offset. We estimated an average accuracy of 0.5 ± 0.2 cm across the entire grid (XY-plane), noting that the central points were more accurate (0.4 ± 0.1 cm) than the outer ones (0.6 ± 0.1 cm). For the Z-axis, the measurements showed greater variability and the accuracy was 1.7 ± 1.2 cm. The occlusion response was tested using nonparametric Bland–Altman statistics, which highlighted the robustness of the tracking system. In conclusion, our results support the use of the SteamVR system for static measurements in the clinical field. The computed error can be considered clinically irrelevant for exercises aimed at the rehabilitation of functional movements, for which motor outcomes are generally measured on the scale of metres.
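The abstract mentions a nonparametric Bland–Altman analysis of the occlusion response. As a minimal sketch of that statistic (not the authors' actual code; data, variable names, and the choice of 2.5th/97.5th percentiles are illustrative assumptions), the bias can be taken as the median difference and the limits of agreement as empirical percentiles of the differences:

```python
import numpy as np

def nonparametric_bland_altman(method_a, method_b):
    """Nonparametric Bland-Altman: median bias and percentile-based
    limits of agreement, instead of mean +/- 1.96 SD."""
    diffs = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = np.median(diffs)                      # central tendency of the disagreement
    lower, upper = np.percentile(diffs, [2.5, 97.5])  # 95% limits of agreement
    return bias, (lower, upper)

# Hypothetical paired position estimates (cm): occluded vs. clear tracking
occluded = [0.52, 0.48, 0.55, 0.60, 0.47, 0.51]
clear    = [0.50, 0.49, 0.53, 0.58, 0.46, 0.52]
bias, (lo, hi) = nonparametric_bland_altman(occluded, clear)
```

Robustness to occlusion would then correspond to a bias near zero and narrow limits of agreement.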
BACKGROUND: The Sit-to-Stand (STS) test is widely used in clinical practice as an indicator of lower-limb functionality decline, especially for older adults. To date, due to its high variability, there is no standard approach for categorising the STS motion pattern, and vision-based evaluation remains the most reliable method to assess people's performance. This paper presents a comparative analysis between visual assessments and an automated software approach for the categorisation of STS, relying on registrations from a force plate.
METHODS: A group of 5 participants (30 ± 6 years) took part in 2 different sessions of visual inspections on 200 STS movements randomly extracted from a dataset of 742 acquisitions under self-paced and controlled speed conditions. Assessors were asked to identify three specific STS events from the Ground Reaction Force, simultaneously with the software analysis: the start of the trunk movement (Initiation), the beginning of the stable upright stance (Standing) and the sitting movement (Sitting). The Test-Retest Reliability between first and second visual evaluations was compared with the Inter-Rater Agreement between visual and software assessments, as indexes of human and software performance, respectively.
RESULTS: No statistical differences between methods were found for the identification of the Initiation and the Sitting events at self-paced speed and for only the Sitting event at controlled speed. The estimated significant values of maximum discrepancy between visual and automated assessments were 0.200 s [0.039; 0.361] in unconstrained conditions and 0.340 s [0.014; 0.666] for standardised movements.
CONCLUSIONS: The software assessments displayed an overall good agreement against visual evaluations of the Ground Reaction Force, relying, at the same time, on objective measures.
In this sense, the proposed approach can provide robust and consistent data in the field of Big Data analytics, augmenting the performance of artificial intelligence methods for Human Activity Recognition tasks.
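The abstract describes identifying three STS events (Initiation, Standing, Sitting) from the Ground Reaction Force. The paper does not specify its detection algorithm; the following is a hedged sketch of one plausible threshold-based approach, where an event is flagged when the vertical force leaves a band around body weight (the function name, threshold, and dwell window are illustrative assumptions, not the authors' method):

```python
import numpy as np

def detect_sts_events(grf_z, fs, body_weight, thresh=0.05):
    """Illustrative STS event detection from vertical GRF.

    An excursion beyond +/- thresh * body_weight marks dynamic phases.
    Returns sample indices for Initiation, Standing, Sitting.
    """
    dev = np.abs(np.asarray(grf_z, dtype=float) - body_weight) > thresh * body_weight
    # Initiation: first sample where the force leaves the body-weight band
    initiation = int(np.argmax(dev))
    # Standing: first sample after Initiation where the force stays
    # within the band for a 0.5 s dwell window (stable upright stance)
    win = int(0.5 * fs)
    standing = None
    for i in range(initiation, len(dev) - win):
        if not dev[i:i + win].any():
            standing = i
            break
    # Sitting: first excursion out of the band after Standing
    sitting = None
    if standing is not None and dev[standing:].any():
        sitting = standing + int(np.argmax(dev[standing:]))
    return initiation, standing, sitting

# Hypothetical 100 Hz signal: quiet sitting, rise, quiet standing, sit-down
fs, bw = 100, 700.0
grf = np.concatenate([np.full(100, bw), np.full(50, 800.0),
                      np.full(200, bw), np.full(30, 600.0)])
initiation, standing, sitting = detect_sts_events(grf, fs, bw)
```

A robust implementation would additionally low-pass filter the signal and estimate body weight from a quiet-sitting baseline rather than take it as a given.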