Type-1 diabetes (T1D) patients must carefully monitor their insulin doses to avoid serious health complications. An effective regimen can be designed by accurately predicting blood glucose levels (BGLs). Several physiological and data-driven models for BGL prediction have been designed. However, less is known about combining different traditional machine learning (ML) algorithms for BGL prediction. Furthermore, most of the available models are patient-specific. This research aims to evaluate several traditional ML algorithms and their novel combinations for generalized BGL prediction. The data of forty T1D patients were generated using the Automated Insulin Dosage Advisor (AIDA) simulator. The twenty-four-hour time-series contained samples at fifteen-minute intervals. The training data was obtained by joining eighty percent of each patient's time-series, and the remaining twenty percent of each time-series was joined to obtain the testing data. The models were trained using multiple patients' data so that they could make predictions for multiple patients. Six traditional non-ensemble algorithms were evaluated for forecasting the BGLs of multiple patients: linear regression (LR), support vector regression (SVR), k-nearest neighbors (KNN), multi-layer perceptron (MLP), decision tree (DCT), and extra tree (EXT). A new ensemble, called the Tree-SVR model, was developed. The BGL predictions from the DCT and the EXT models were fed as features into the SVR model to obtain the final prediction. The ensemble approach used in this research was based on the stacking technique. The Tree-SVR model outperformed the non-ensemble models (LR, SVR, KNN, MLP, DCT, and EXT) and other novel Tree variants (Tree-LR, Tree-MLP, and Tree-KNN). This research highlights the utility of designing ensembles using traditional ML algorithms for generalized BGL prediction.
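
The stacking idea described above can be illustrated with a minimal Python sketch using scikit-learn. The placeholder data, feature layout, and default hyperparameters below are assumptions for illustration only, not the authors' exact implementation or the AIDA-derived dataset.

```python
# Minimal sketch of the Tree-SVR stacking ensemble (assumed setup).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
from sklearn.svm import SVR
from sklearn.ensemble import StackingRegressor

# Placeholder data: rows are pooled multi-patient samples, columns are
# assumed lagged BGL/insulin/meal features; y is the BGL at the next
# fifteen-minute step. Replace with the actual AIDA-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.normal(size=1000)

# Base learners: decision tree (DCT) and extra tree (EXT). Their
# out-of-fold predictions become the features for the SVR meta-learner,
# which is the stacking arrangement named in the abstract.
tree_svr = StackingRegressor(
    estimators=[("dct", DecisionTreeRegressor()),
                ("ext", ExtraTreeRegressor())],
    final_estimator=SVR(),
)
tree_svr.fit(X, y)
bgl_predictions = tree_svr.predict(X)
```

Swapping `SVR()` for `LinearRegression`, `MLPRegressor`, or `KNeighborsRegressor` as the `final_estimator` would correspond to the Tree-LR, Tree-MLP, and Tree-KNN variants mentioned above.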