There has been a surge in the use of deep neural networks for decoding central and peripheral activations of the human nervous system, with the goal of boosting the spatiotemporal resolution of neural interfaces used in neurorobotics. Such algorithmic solutions are motivated by human-centered robotic systems, such as neurorehabilitation, prosthetics, and exoskeletons. These methods have been shown to achieve higher accuracy on individual data than conventional machine learning methods, but they are challenged by their assumption of access to massive amounts of training data. Objective: In this letter, we propose the Dilated Efficient CapsNet to improve predictive performance when the available individual data are too scarce to train an individualized network for controlling a personalized robotic system. Method: We propose transfer learning with a new design of dilated efficient capsular neural network to relax the need for massive individual data and to exploit the field knowledge that can be learned from a group of participants. In addition, instead of using complete sEMG signals, we use only the transient phase, reducing the volume of training samples to 20\% of the original and maximizing agility.
Results: In experiments, we validate our model's performance with varying amounts of injected personalized training data (25\%-100\% of the transient phase), segmented once by time and once by repetition. The results support the use of transfer learning with a dilated capsular neural network and show that the domain knowledge learned from a small number of subjects can be exploited to minimize the need for new data from new subjects, while focusing only on the transient phase of contraction, which is a challenging neural interfacing problem.
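As a rough illustration of the data reduction described above, the sketch below keeps only the leading portion of each sEMG repetition as the transient phase. The `extract_transient_phase` helper, the 20\% fraction applied per repetition, and the array layout (repetitions, samples, channels) are assumptions for illustration, not the paper's actual preprocessing pipeline.

```python
import numpy as np

def extract_transient_phase(emg: np.ndarray, fraction: float = 0.2) -> np.ndarray:
    """Keep only the leading (transient) part of each contraction repetition.

    emg: array of shape (n_repetitions, n_samples, n_channels)
    fraction: hypothetical share of each repetition treated as transient
    """
    n_keep = int(emg.shape[1] * fraction)
    # Slice along the time axis; repetitions and channels are untouched.
    return emg[:, :n_keep, :]

# Toy example: 6 repetitions, 1000 time samples, 8 sEMG channels.
signals = np.random.randn(6, 1000, 8)
transient = extract_transient_phase(signals)
print(transient.shape)  # -> (6, 200, 8): training volume reduced to 20% per repetition
```

Under these assumptions, every repetition contributes only its first 20\% of samples, which is the sense in which the training volume shrinks to 20\% of the original.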