Brain-computer interfaces (BCIs) record brain activity in the form of electroencephalogram (EEG) signals using EEG headsets; these signals can be processed and classified into different hand movements, which in turn can be used to control other IoT devices. Accurate classification of hand movements brings these algorithms one step closer to real-life applications of EEG headsets. This paper uses different feature extraction techniques and sophisticated machine learning algorithms to classify hand movements from EEG brain signals, with the goal of controlling prosthetic hands for amputees. To achieve good classification accuracy, denoising and feature extraction of the EEG signals are essential steps. We observed a considerable increase in accuracy across all machine learning models when a moving average filter was applied to the raw EEG data. Feature extraction techniques such as the fast Fourier transform (FFT) and the continuous wavelet transform (CWT) were used in this study; three types of features were extracted: FFT features, CWT coefficients, and CWT scalogram images. We trained and compared different machine learning (ML) models, namely logistic regression, random forest, k-nearest neighbors (KNN), light gradient boosting machine (LightGBM), and XGBoost, on the FFT and CWT features, and deep learning (DL) models, namely VGG-16, DenseNet201, and ResNet50, trained on the CWT scalogram images. XGBoost with FFT features gave the highest accuracy of 88%.
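
For concreteness, below is a minimal sketch of the denoising and feature-extraction pipeline the abstract describes (moving average filter, FFT features, CWT coefficients, and an XGBoost classifier). The window size, wavelet choice, scale range, and classifier settings are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch of the described pipeline; all parameter values are assumptions.
import numpy as np
import pywt                        # PyWavelets, for the CWT
from numpy.fft import rfft
from xgboost import XGBClassifier

def moving_average(signal, window=5):
    """Denoise a 1-D EEG channel with a simple moving-average filter."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def fft_features(signal):
    """Magnitude spectrum of the denoised signal as the FFT feature vector."""
    return np.abs(rfft(signal))

def cwt_features(signal, scales=np.arange(1, 32), wavelet="morl"):
    """Flattened CWT coefficients; the 2-D coefficient array itself,
    rendered as an image, is the scalogram fed to the DL models."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return coeffs.ravel()

# Hypothetical usage: X_raw holds EEG trials of shape (n_trials, n_samples)
# and y holds the hand-movement labels; random placeholders stand in here.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 256))
y = rng.integers(0, 2, size=100)

X_fft = np.array([fft_features(moving_average(trial)) for trial in X_raw])
clf = XGBClassifier(n_estimators=100, eval_metric="logloss")
clf.fit(X_fft, y)
```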