Electromyogram (EMG) signals play an important role in recognizing hand and finger movements and in controlling prosthetic devices. In recent years, EMG signals have become widely used in the design and control of human-machine interfaces and rehabilitation equipment such as robotic prostheses. This study aims to develop a novel EMG-based model for classifying basic hand grip movements that can improve prosthetic hand control for individuals who have lost a limb. The proposed approach combines Time Domain Descriptors (TDD), a convolutional neural network (CNN), long short-term memory (LSTM) networks, minimum redundancy maximum relevance (MRMR) feature selection, and a support vector machine (SVM). First, the TDD, CNN, and LSTM models are applied to extract features from the EMG signals. The extracted features are then passed to MRMR, which selects the most effective subset. Finally, the SVM classifies the different hand grip movements. The effectiveness of the proposed model was evaluated on the EMG hand gestures dataset from the publicly available UCI repository. In the experiments, an accuracy of 95.63% was achieved for the first two of the five subjects and 100% for the remaining three, yielding an average specificity of 99.66% and an average accuracy of 98.34% across the five subjects. Moreover, comparison with state-of-the-art methods on the same dataset shows that the proposed hybrid model achieves a higher classification rate and outperforms several previous studies. These results indicate that the proposed model can serve as a low-cost control unit that classifies hand grips from EMG signals with high accuracy.
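To illustrate the selection-and-classification stages described above, the following is a minimal Python sketch of greedy MRMR feature selection followed by an SVM classifier. It is a sketch under stated assumptions, not the paper's implementation: the synthetic feature matrix, the six-class label vector, the number of selected features, and the helper name `mrmr_select` are all placeholders standing in for the actual TDD/CNN/LSTM features and the UCI dataset.

```python
# Hypothetical sketch: greedy MRMR feature selection + SVM classification
# on pre-extracted EMG features. Synthetic data replaces the real
# TDD/CNN/LSTM feature matrix, so the printed accuracy is near chance.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 120))   # placeholder for concatenated TDD/CNN/LSTM features
y = rng.integers(0, 6, size=600)      # placeholder labels for six hand-grip classes


def mrmr_select(X, y, k):
    """Greedily pick k features with high relevance to y and low redundancy
    (mean absolute correlation) with the features already selected."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [i for i in range(X.shape[1]) if i not in selected]
        redundancy = corr[np.ix_(candidates, selected)].mean(axis=1)
        scores = relevance[candidates] - redundancy
        selected.append(candidates[int(np.argmax(scores))])
    return selected


idx = mrmr_select(X, y, k=30)
X_train, X_test, y_train, y_test = train_test_split(
    X[:, idx], y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```

In this sketch the MRMR score is the difference between mutual-information relevance and mean correlation redundancy; other formulations (e.g., the quotient form) could be substituted without changing the overall pipeline.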