Motion intention detection is fundamental to the implementation of human-machine interfaces for assistive robots. In this paper, multiple machine learning techniques are explored for creating upper limb motion prediction models, which generally depend on three factors: the signals collected from the user (such as kinematic or physiological signals), the features extracted from them, and the selected algorithm. We explore how different features, extracted from various signals, perform when used to train multiple algorithms to predict elbow flexion angle trajectories. Prediction accuracy was evaluated based on the mean velocity and peak amplitude of the trajectory, which are sufficient to fully define it. Results show that prediction accuracy is low when using solely physiological signals; however, it improves substantially when kinematic signals are included, suggesting that kinematic signals provide a reliable source of information for predicting elbow trajectories. Models were trained using 10 different algorithms. Regularization algorithms performed well in all conditions, whereas neural networks performed better when only the most important features were selected. The extensive analysis provided in this study can be consulted to aid the development of accurate upper limb motion intention detection models.
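
As a rough illustration of the kind of pipeline described above, and not the authors' actual implementation, the sketch below predicts the two parameters that define an elbow flexion trajectory (peak amplitude and mean velocity) from a feature matrix, comparing a regularized linear model against a small neural network trained on only the most relevant features. The data are synthetic placeholders, and the specific estimators (Ridge, MLPRegressor, SelectKBest) are assumptions for illustration only.

```python
# Minimal sketch under stated assumptions: synthetic data, scikit-learn estimators.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder feature matrix: e.g. physiological (EMG) features plus kinematic
# features sampled around movement onset. Purely synthetic here.
n_samples, n_features = 300, 40
X = rng.normal(size=(n_samples, n_features))

# Placeholder targets: peak elbow flexion amplitude (deg) and mean velocity (deg/s).
true_w = rng.normal(size=(n_features, 2))
Y = X @ true_w + 0.5 * rng.normal(size=(n_samples, 2))
target_names = ["peak_amplitude_deg", "mean_velocity_deg_s"]

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

for j, name in enumerate(target_names):
    # Regularized linear model trained on all features.
    ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    ridge.fit(X_tr, Y_tr[:, j])
    mae_ridge = mean_absolute_error(Y_te[:, j], ridge.predict(X_te))

    # Small neural network trained only on the k most relevant features.
    mlp = make_pipeline(
        StandardScaler(),
        SelectKBest(score_func=f_regression, k=10),
        MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    )
    mlp.fit(X_tr, Y_tr[:, j])
    mae_mlp = mean_absolute_error(Y_te[:, j], mlp.predict(X_te))

    print(f"{name}: Ridge MAE={mae_ridge:.2f}, MLP(+selection) MAE={mae_mlp:.2f}")
```

In this toy setup, each trajectory parameter is predicted by a separate regressor, mirroring the idea that peak amplitude and mean velocity together fully define the trajectory; any resemblance to the paper's actual feature sets or model configurations is coincidental.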