Objective.
The gait phase and joint angle are two essential and complementary components of kinematics during normal walking, and their accurate prediction is critical for lower-limb rehabilitation, such as the control of exoskeleton robots. Multi-modal signals have been used to improve the prediction performance of the gait phase or the joint angle separately, but few studies have examined how these signals can be used to predict both simultaneously.
Approach.
To address this problem, we propose a new method named transferable multi-modal fusion (TMMF) that continuously predicts knee angles and the corresponding gait phases by fusing multi-modal signals. Specifically, TMMF consists of a multi-modal signal fusion block, a time series feature extractor, a regressor, and a classifier. The fusion block leverages the Maximum Mean Discrepancy (MMD) to reduce the distribution discrepancy across modalities in the latent space, achieving transferable multi-modal fusion. A long short-term memory-based network then extracts feature representations from the time series data to predict the knee angles and gait phases simultaneously. To validate the proposal, we design an experimental paradigm with random walking and resting to collect multi-modal biomedical signals from electromyography, gyroscopes, and virtual reality.
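For readers unfamiliar with MMD-based alignment, the sketch below illustrates the general idea of penalizing distribution discrepancy between two modalities' latent features with a Gaussian-kernel MMD. It is a minimal illustration, not the authors' implementation; the feature dimensions, batch size, kernel bandwidth, and variable names (z_emg, z_gyro) are assumptions for demonstration only.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of latent features."""
    dist = torch.cdist(x, y) ** 2            # pairwise squared Euclidean distances
    return torch.exp(-dist / (2 * sigma ** 2))

def mmd_loss(z_a, z_b, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two sets of latent features."""
    k_aa = gaussian_kernel(z_a, z_a, sigma).mean()
    k_bb = gaussian_kernel(z_b, z_b, sigma).mean()
    k_ab = gaussian_kernel(z_a, z_b, sigma).mean()
    return k_aa + k_bb - 2 * k_ab

# Hypothetical example: align EMG and gyroscope embeddings of dimension 64
z_emg = torch.randn(32, 64)    # batch of EMG latent features (assumed shape)
z_gyro = torch.randn(32, 64)   # batch of gyroscope latent features (assumed shape)
alignment_loss = mmd_loss(z_emg, z_gyro)
```

In a fusion block of this kind, such an alignment term would typically be added to the prediction losses so that the modality-specific encoders map their inputs into a shared latent space before the downstream regressor and classifier.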
Main results.
Comprehensive experiments on our constructed dataset demonstrate the effectiveness of the proposed method. TMMF achieves a root mean square error of 0.090±0.022 s in knee angle prediction and a precision of 83.7±7.7\% in gait phase prediction.
Significance.
We demonstrate the feasibility and validity of using TMMF to continuously predict lower-limb kinematics from multi-modal biomedical signals. The proposed method shows potential for application in predicting the motor intent of patients with different pathologies.