This study assessed the performance of single and modified machine learning and deep learning algorithms for modeling a wastewater treatment process. Specifically, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) were adopted as machine learning algorithms, and long short-term memory (LSTM) as a deep learning algorithm. The performance of these single algorithms was compared with that of modified versions obtained through hyperparameter tuning, ensemble learning (machine learning only), and multi-layer stacking (i.e., two stacked layers of LSTM units). Daily effluent data from the wastewater treatment process in the Cheong-Ju National Industrial Complex, observed between 2017 and 2022, were used as input to all tested algorithms, whose performance was evaluated in terms of mean squared error. For the performance evaluation, discharge and biochemical oxygen demand were selected as dependent variables from the nine measured parameters. Results showed that every machine learning algorithm outperformed its deep learning competitor, LSTM, which was mainly attributed to the small amount of input data available to the LSTM algorithm and the unstable characteristics of the effluent. Meanwhile, hyperparameter tuning improved the performance of all tested algorithms. However, ensemble learning for the machine learning algorithms and two-layer stacking for LSTM generally degraded performance relative to the single algorithms, regardless of the dependent variable. These findings call for careful design and evaluation of modified algorithms, particularly with respect to model architecture and performance-improvement procedures.
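The sketch below illustrates the type of comparison described above: single SVM/RF/ANN regressors, their hyperparameter-tuned counterparts, and a two-layer (stacked) LSTM, all scored by mean squared error. It is not the authors' code; the use of scikit-learn and TensorFlow/Keras, the synthetic data standing in for the Cheong-Ju effluent records, the hyperparameter grids, layer sizes, window length, and the 80/20 chronological split are all illustrative assumptions.

```python
# Minimal sketch of the single-vs-tuned-vs-stacked-LSTM comparison.
# Synthetic data replace the actual effluent time series; all settings are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_samples, n_features, n_steps = 1500, 8, 7                 # placeholder dimensions
X = rng.normal(size=(n_samples, n_features))                # stand-in for measured water-quality parameters
y = X @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=n_samples)  # stand-in for effluent BOD
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# --- single machine learning algorithms (default settings) ---
singles = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "ANN": MLPRegressor(max_iter=2000, random_state=0),
}
for name, model in singles.items():
    model.fit(X_tr, y_tr)
    print(f"{name} (single) MSE:", mean_squared_error(y_te, model.predict(X_te)))

# --- hyperparameter tuning (illustrative grids) ---
grids = {
    "SVM": (SVR(), {"C": [1, 10, 100], "gamma": ["scale", 0.1]}),
    "RF": (RandomForestRegressor(random_state=0),
           {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "ANN": (MLPRegressor(max_iter=2000, random_state=0),
            {"hidden_layer_sizes": [(32,), (64, 32)]}),
}
for name, (model, grid) in grids.items():
    search = GridSearchCV(model, grid, scoring="neg_mean_squared_error", cv=3)
    search.fit(X_tr, y_tr)
    print(f"{name} (tuned) MSE:", mean_squared_error(y_te, search.predict(X_te)))

# --- two-layer (stacked) LSTM for the deep learning branch ---
# Tabular rows are reshaped into short sliding windows so the LSTM sees a sequence.
def make_windows(X, y, steps):
    Xw = np.stack([X[i:i + steps] for i in range(len(X) - steps)])
    return Xw, y[steps:]

Xw_tr, yw_tr = make_windows(X_tr, y_tr, n_steps)
Xw_te, yw_te = make_windows(X_te, y_te, n_steps)

lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # first stacked LSTM layer
    tf.keras.layers.LSTM(64),                          # second stacked LSTM layer
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(Xw_tr, yw_tr, epochs=20, batch_size=32, verbose=0)
print("LSTM (two-layer) MSE:",
      mean_squared_error(yw_te, lstm.predict(Xw_te, verbose=0).ravel()))
```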