Sentiment analysis plays a pivotal role in understanding public opinion, particularly in digital communication, where vast amounts of textual data are generated daily. This study examines the effectiveness of two sentiment classification models, the Naive Bayes Classifier (NBC) and the Support Vector Machine (SVM), on the imbalanced datasets commonly encountered in sentiment analysis tasks. Using a comparative methodology, a dataset of robot hotel reviews collected from online platforms serves as the basis for evaluation. Both NBC and SVM are trained and assessed with and without the Synthetic Minority Over-sampling Technique (SMOTE) to address class imbalance. Performance is evaluated using accuracy, precision, recall, f-measure, and Area Under the Curve (AUC). The results show that SVM outperforms NBC in accuracy (SVM: 76.88%, NBC: 67.43%), precision (SVM: 92.03%, NBC: 86.87%), recall (SVM: 58.88%, NBC: 41.00%), and f-measure (SVM: 71.78%, NBC: 55.63%), while the AUC values are 0.907 for SVM and 0.961 for NBC. Incorporating SMOTE markedly improves the performance of both models, particularly with respect to class imbalance. Although NBC exhibits a more balanced performance across precision and recall, SVM demonstrates higher accuracy and stronger predictive capability in sentiment classification. These findings underscore the importance of algorithm selection and preprocessing techniques in optimizing sentiment analysis performance, offering useful insights for practitioners and researchers alike.
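
To make the experimental setup concrete, the following is a minimal sketch of such a comparative pipeline, assuming a scikit-learn / imbalanced-learn implementation with TF-IDF features and binary (0/1) sentiment labels; the feature representation, model parameters, and train/test split are illustrative assumptions and are not taken from the study itself.

```python
# Hedged sketch: compare NBC and SVM on text sentiment, optionally with SMOTE.
# Assumes scikit-learn and imbalanced-learn; TF-IDF features and the 80/20
# split are illustrative choices, not the paper's documented configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from imblearn.over_sampling import SMOTE


def evaluate(texts, labels, use_smote=True):
    # Vectorize the reviews into sparse TF-IDF features (assumed preprocessing).
    X = TfidfVectorizer().fit_transform(texts)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=42)

    # Oversample the minority class on the training split only, so the test
    # set still reflects the original class imbalance.
    if use_smote:
        X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

    for name, model in [("NBC", MultinomialNB()),
                        ("SVM", SVC(kernel="linear", probability=True))]:
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        proba = model.predict_proba(X_test)[:, 1]
        print(name,
              "acc=%.4f" % accuracy_score(y_test, pred),
              "prec=%.4f" % precision_score(y_test, pred),
              "rec=%.4f" % recall_score(y_test, pred),
              "f1=%.4f" % f1_score(y_test, pred),
              "auc=%.4f" % roc_auc_score(y_test, proba))
```

Running `evaluate` once with `use_smote=False` and once with `use_smote=True` reproduces the with/without-SMOTE comparison described above, with all metrics computed on the same held-out test split.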