Many upper-limb prostheses lack adequate wrist rotation functionality, forcing users to adopt poor compensatory strategies that can lead to overuse injury or device abandonment. In this study, we investigate the feasibility of a data-driven predictive control strategy for object-grasping tasks performed in virtual reality. We propose using gaze-centered vision to predict a user's wrist rotations and conduct a user study to assess the impact of this predictive control. We demonstrate that this vision-based predictive system reduces both compensatory shoulder movement and task completion time. We discuss the cases in which the virtual prosthesis with the predictive model did and did not physically improve various arm movements. We also discuss the cognitive value of incorporating such predictive control strategies into prosthetic controllers. We find that gaze-centered vision provides information about user intent during object reaching, and that prosthetic hand performance improves substantially when wrist prediction is implemented. Lastly, we address the limitations of this study, both in its own design and with respect to future physical implementations.
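As a concrete illustration of the proposed control strategy, the sketch below shows one plausible form of a gaze-centered wrist-rotation predictor: a small convolutional network that regresses a pronation/supination angle from an image crop centered on the user's gaze. The abstract does not specify the model, so the architecture, crop size, and sin/cos angle parameterization here are all illustrative assumptions, not the study's implementation.

```python
# Minimal sketch (PyTorch) of a gaze-centered wrist-rotation predictor.
# All architectural details below are assumptions for illustration.
import torch
import torch.nn as nn

class WristRotationPredictor(nn.Module):
    """Regresses a pronation/supination angle from a gaze-centered image crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Predict (sin, cos) of the wrist angle to avoid wraparound at +/-180 deg.
        self.head = nn.Linear(32, 2)

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        z = self.features(crop).flatten(1)
        sincos = self.head(z)
        return torch.atan2(sincos[:, 0], sincos[:, 1])  # angle in radians

# Usage: a batch of hypothetical 64x64 crops around the fixated object.
crops = torch.randn(8, 3, 64, 64)
angles = WristRotationPredictor()(crops)  # shape (8,)
```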
We anticipate wide adoption of wrist and forearm electromyographic (EMG) interface devices worn daily by the same user. This presents unique challenges that are not yet well addressed in the EMG literature, such as adapting to session-specific differences while learning a longer-term model of the specific user. In this manuscript we present two contributions toward this goal. First, we present the MiSDIREKt (Multi-Session Dynamic Interaction Recordings of EMG and Kinematics) dataset acquired using a novel hardware design. A single participant performed four kinds of hand interaction tasks in virtual reality for 43 distinct sessions over 12 days, totaling 814 min. Second, we analyze this data using a non-linear encoder-decoder for dimensionality reduction in gesture classification. We find that an architecture that recalibrates with a small amount of single-session data performs at an accuracy of 79.5% on that session, as opposed to architectures that learn solely from the single session (49.6%) or only from the training data (55.2%).
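To make the recalibration idea concrete, the sketch below shows one way such an architecture could be organized: a shared non-linear encoder-decoder learned from the multi-session training data, plus a small session-specific input layer that is refit from a little labeled new-session data while everything else stays frozen. The layer sizes, the 16-channel EMG input, the gesture count, and the loss weighting are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (PyTorch) of session recalibration for an EMG
# encoder-decoder gesture classifier. Sizes below are assumptions.
import torch
import torch.nn as nn

N_EMG, N_LATENT, N_GESTURES = 16, 8, 10  # hypothetical channel/class counts

class SessionAdaptedAutoencoder(nn.Module):
    """Shared non-linear encoder-decoder with a small session-specific
    input layer that can be refit from a little new-session data."""
    def __init__(self):
        super().__init__()
        self.session_in = nn.Linear(N_EMG, N_EMG)  # recalibrated per session
        self.encoder = nn.Sequential(nn.Linear(N_EMG, 32), nn.Tanh(),
                                     nn.Linear(32, N_LATENT))
        self.decoder = nn.Sequential(nn.Linear(N_LATENT, 32), nn.Tanh(),
                                     nn.Linear(32, N_EMG))
        self.classifier = nn.Linear(N_LATENT, N_GESTURES)

    def forward(self, emg):
        z = self.encoder(self.session_in(emg))
        return self.classifier(z), self.decoder(z)

def recalibrate(model, emg, labels, steps=200, lr=1e-3):
    """Fit only the session-specific input layer on a small labeled batch,
    keeping the multi-session encoder, decoder, and classifier frozen."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.session_in.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.session_in.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits, recon = model(emg)
        loss = (nn.functional.cross_entropy(logits, labels)
                + nn.functional.mse_loss(recon, emg))
        loss.backward()
        opt.step()

# Usage: recalibrate on a small labeled slice of a new session, then classify.
model = SessionAdaptedAutoencoder()  # assume pretrained on prior sessions
emg_cal = torch.randn(64, N_EMG)
y_cal = torch.randint(0, N_GESTURES, (64,))
recalibrate(model, emg_cal, y_cal)
preds = model(torch.randn(5, N_EMG))[0].argmax(dim=1)
```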