XGBoost's optimization hyperparameters were the learning rate (0.001, 0.01, 0.1), the maximum tree depth (3, 5, 7, 9), and the number of trees in the ensemble (100, 200, 300, 400, 500). The hyperparameters for the KF-GRU and KF-LSTM models included the sequence structure used in the Kalman filter, the number of target variables (1), the number of feature columns (4), the hidden layer size (1, 2, 3, 4), the number of encoder-decoder layers (1, 2, 3, 4), the dropout rate used for regularization (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8), the encoder and decoder size (1, 2, 3, 4), the batch size (2, 4, 8, 16, 32, 64, 128), and the learning rate (0.1, 0.01, 0.001). In the next section, we discuss the performance of each model on the training and test sets.
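As a concrete illustration of these search spaces, the sketch below encodes both grids as Python dictionaries and wires the XGBoost grid into an exhaustive grid search. This is a minimal sketch rather than the authors' code: the XGBoost keys map directly onto `xgboost.XGBRegressor` arguments, while the KF-GRU/KF-LSTM key names (`n_targets`, `hidden_size`, and so on) are assumed identifiers, since the paper's model classes are not reproduced here.

```python
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

# XGBoost grid: 3 learning rates x 4 depths x 5 ensemble sizes = 60 configurations.
xgb_grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "max_depth": [3, 5, 7, 9],
    "n_estimators": [100, 200, 300, 400, 500],
}

# Exhaustive 5-fold cross-validated search over the XGBoost grid.
search = GridSearchCV(
    XGBRegressor(),
    xgb_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
# search.fit(X_train, y_train)  # X_train / y_train stand in for the training data

# KF-GRU / KF-LSTM grid; key names are assumptions, not the authors' identifiers.
kf_rnn_grid = {
    "n_targets": [1],                      # number of target variables
    "n_features": [4],                     # number of feature columns
    "hidden_size": [1, 2, 3, 4],           # hidden layer size
    "n_enc_dec_layers": [1, 2, 3, 4],      # number of encoder-decoder layers
    "dropout": [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    "enc_dec_size": [1, 2, 3, 4],          # encoder and decoder size
    "batch_size": [2, 4, 8, 16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}
```

Note that the KF-GRU/KF-LSTM grid spans 4 x 4 x 7 x 4 x 7 x 3 = 9,408 combinations for the varied hyperparameters alone, which is why grids of this size are often sampled (e.g., randomized search) rather than enumerated exhaustively.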