Machine learning is widely used to categorize data into classes based on their features, but classification becomes challenging when datasets are imbalanced. An imbalanced dataset contains a disproportionate number of samples across classes, which biases a model toward the majority class and degrades recognition of minority classes, often producing notable prediction errors for those under-represented classes. This research proposes a cascade and parallel architecture for the training process to improve accuracy and speed over non-cascade, sequential approaches, and evaluates the performance of the SVM and Random Forest methods. Our findings show that a Random Forest configured with 100 trees improves classification accuracy by 4.72 percentage points, from 58.87% to 63.59%, compared to non-cascade classifiers. Furthermore, adopting the Message Passing Interface for Python (MPI4Py) to parallelize training across multiple cores or nodes substantially increases training speed: parallel processing accelerated training by up to 4.35 times, reducing the duration from 1725.86 milliseconds to 396.54 milliseconds. These results highlight the advantages of integrating parallel processing with a cascade architecture in machine learning models, particularly when addressing imbalanced datasets, and demonstrate the potential for substantial improvements in the accuracy and efficiency of classification tasks.
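The cascade idea described above can be sketched as a two-stage decision: a first-stage model separates the majority class from the rest, and a second-stage model, trained only on minority-class samples, resolves the remaining classes. The minimal stdlib-only sketch below is illustrative, not the paper's implementation: the threshold "models" stand in for the SVM/Random Forest stages, and all function names, labels, and data values are hypothetical.

```python
# Hypothetical two-stage cascade classifier (sketch only).
# Stage 1 decides majority vs. minority; stage 2 runs only on
# samples routed past stage 1, so it can be trained on a
# minority-only subset rather than the full imbalanced data.

def stage1_is_majority(x):
    # Stand-in for the first classifier (e.g. an SVM or Random
    # Forest): a feature value below 0.5 means "majority class".
    return x < 0.5

def stage2_minority_label(x):
    # Stand-in for the second classifier, trained only on
    # minority samples; splits them into classes "B" and "C".
    return "B" if x < 0.75 else "C"

def cascade_predict(x):
    if stage1_is_majority(x):
        return "A"  # majority class handled by the first stage
    return stage2_minority_label(x)

samples = [0.1, 0.4, 0.6, 0.9]
print([cascade_predict(x) for x in samples])  # ['A', 'A', 'B', 'C']
```

Because the second stage never sees majority-class samples, its training set is far less imbalanced, which is the mechanism behind the accuracy gain reported above.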