Hate speech, characterized by language that incites discrimination, hostility, or violence against individuals or groups based on attributes such as race, religion, or gender, has become a critical issue on social media platforms. In Indonesia, linguistic complexities such as slang, informal expressions, and code-switching further complicate its detection. This study evaluates the performance of Support Vector Machine (SVM), Naive Bayes, and IndoBERT models for multi-label hate speech detection on a dataset of 13,169 annotated Indonesian tweets. The results show that IndoBERT outperforms SVM and Naive Bayes across all metrics, achieving an accuracy of 93%, an F1-score of 91%, a precision of 91%, and a recall of 91%. IndoBERT's contextual embeddings capture nuanced relationships and complex linguistic patterns that the traditional methods miss. The study addresses dataset imbalance using BERT-based data augmentation, which yields significant metric improvements, particularly for SVM and Naive Bayes. Preprocessing steps proved essential for standardizing the dataset for effective model training. This research underscores IndoBERT's potential for advancing hate speech detection in non-English, low-resource languages. The findings contribute to the development of scalable, language-specific solutions for managing harmful online content, promoting safer and more inclusive digital environments.
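
The abstract highlights preprocessing as essential for standardizing noisy Indonesian tweets before training. As a minimal sketch of what such a step might look like, the snippet below lowercases text, strips URLs, mentions, and punctuation, and normalizes a few slang tokens. The `SLANG_MAP` entries are illustrative assumptions, not the paper's actual lexicon, and the exact pipeline used in the study may differ.

```python
import re

# Illustrative Indonesian slang -> standard-form map. These pairs are
# assumptions for the sketch; the study's real normalization lexicon is
# not given in the abstract.
SLANG_MAP = {
    "gak": "tidak",
    "ga": "tidak",
    "bgt": "banget",
    "yg": "yang",
}

def preprocess_tweet(text: str) -> str:
    """Standardize a tweet: lowercase, remove URLs/mentions/punctuation,
    normalize slang tokens, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = re.sub(r"@\w+", " ", text)          # remove user mentions
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters only
    tokens = [SLANG_MAP.get(tok, tok) for tok in text.split()]
    return " ".join(tokens)

print(preprocess_tweet("@user Gak suka bgt sama org yg gitu! https://t.co/xyz"))
# -> "tidak suka banget sama org yang gitu"
```

Cleaned text like this would then feed the TF-IDF-style features of the SVM and Naive Bayes baselines, or the IndoBERT tokenizer, though the abstract does not specify those details.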