Sentiment analysis is a valuable tool, particularly for social media, as it enables us to understand public opinion about specific products or topics. However, short, unstructured texts such as tweets pose significant challenges. This paper explores conventional Machine Learning (ML) approaches, namely Naive Bayes, Logistic Regression, and Support Vector Machines, for sentiment analysis and compares them against Bidirectional Encoder Representations from Transformers (BERT). Moreover, we propose a new preprocessing technique for sentiment analysis that enhances the effectiveness of these methods. Our findings show notable improvements in the performance of the conventional ML models. Interestingly, BERT outperforms all of the aforementioned models, achieving an accuracy of about 94%, though at a high computational cost, while Logistic Regression also performs well, with an accuracy of 90.35%. With respect to feature extraction, we show that combining unigrams and bigrams provides a more thorough treatment of negation than relying on unigrams alone. Finally, we propose an approach for handling emoticons and emojis that has proven useful for sentiment analysis and sarcasm interpretation.
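
The intuition behind combining unigrams and bigrams can be sketched as follows. This is an illustrative toy example, not the paper's actual feature-extraction pipeline: the tokenizer (simple whitespace split) and the `extract_features` helper are assumptions for demonstration only.

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as space-joined strings) over a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def extract_features(text):
    """Toy feature extractor: lowercase, whitespace-tokenize,
    and emit both unigrams and bigrams as features."""
    tokens = text.lower().split()
    return ngrams(tokens, 1) + ngrams(tokens, 2)

features = extract_features("not good at all")
```

Here the bigram feature `"not good"` keeps the negation attached to the word it modifies, whereas a unigram-only representation would see `"not"` and `"good"` as independent (and misleadingly positive) features.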