Sentiment analysis is a core task in natural language processing for identifying emotional nuances in text. The task becomes especially intricate for code-mixed texts, such as Amharic-English, which exhibit language diversity and frequent code-switching, particularly in social media exchanges. In this study, we employ CNN, LSTM, BiLSTM, and CNN-BiLSTM models for sentiment classification of such code-mixed texts. Our approach combines deep learning with several preprocessing methods, including language detection and code-switching integration. We conducted four experiments using Count Vectorizer and TF-IDF features. Our evaluation shows that incorporating language detection and code-switching significantly improves model accuracy: the average accuracy of the CNN model increased from 82.004% to 84.458%, that of the LSTM model from 79.716% to 81.234%, that of the BiLSTM model from 81.586% to 83.402%, and that of the CNN-BiLSTM model from 82.128% to 84.765%. Our findings underscore the need to address language diversity and code-switching for dependable sentiment analysis in multilingual environments, and they highlight the value of language-specific preprocessing techniques for optimizing model performance across diverse linguistic contexts, offering a useful direction for future research.
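The two featurization schemes named above, Count Vectorizer and TF-IDF, can be contrasted with a minimal sketch. This is an illustration only, not the paper's actual pipeline; the toy code-mixed sentences and the use of scikit-learn are assumptions.

```python
# Minimal sketch (not the study's code) contrasting the two featurizers
# compared in the experiments: raw counts vs. TF-IDF weighting.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Hypothetical Amharic-English code-mixed sentences for illustration.
texts = [
    "betam good ነው this movie",
    "the service was መጥፎ really bad",
    "ጥሩ experience, highly recommend",
]

X_count = CountVectorizer().fit_transform(texts)  # term-frequency counts
X_tfidf = TfidfVectorizer().fit_transform(texts)  # counts reweighted by inverse document frequency

# Both produce a document-term matrix over the same mixed-script vocabulary;
# only the cell weighting differs.
print(X_count.shape, X_tfidf.shape)
```

Either sparse matrix can then be fed to a downstream classifier; the abstract's deep models (CNN, LSTM, BiLSTM, CNN-BiLSTM) would instead consume token sequences, with language detection applied during preprocessing.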