In machine learning, data preparation is a pivotal step in optimizing model performance. This paper examines the role of data cleaning and transformation, with particular emphasis on resampling techniques for imbalanced datasets. Through an exploration of both undersampling and oversampling methods, the study analyzes their nuanced impacts on classification performance and the trade-offs inherent in each approach. Focusing on the domain of credit card default prediction, the research leverages the UCI Credit Card dataset to provide a comprehensive analysis. The results show that NearMiss outperformed the other undersampling techniques, and K-MeansSMOTE the other oversampling techniques, across all classifiers and evaluation metrics; among all techniques investigated, K-MeansSMOTE oversampling yielded the highest accuracy. These findings enhance our understanding of the relative strengths and weaknesses of resampling methods when paired with different machine learning algorithms, contribute to the scholarship on handling imbalanced datasets, and underscore the importance of tailoring data preparation to the task at hand. While offering valuable insights, the study acknowledges the need for further research to refine and generalize these techniques across diverse domains and real-world applications, thereby contributing to the broader landscape of machine learning methodologies.