“…As for the parameters tested, for the DNN, we explored the number of neurons in the first dense layer (16, 32, 64), learning rates (0.001, 0.01, 0.05), dropout rates (0.1, 0.3, 0.5), batch sizes (8, 16, 32), epochs (30, 70), activation functions (sigmoid, tanh, relu, and swish), and optimizers (Adaptive Moment Estimation (Adam), Stochastic Gradient Descent, and Adamax, a variant of Adam based on the infinity norm). For the GB classifier, we adjusted several hyperparameters: learning rate (0.1, 0.2, 0.3), max depth (5), max features (sqrt, log2), min samples leaf (10, 20), min samples split (20, 30), n_estimators (200, 300), and subsample (0.8, 0.9).…”
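For concreteness, the sketch below expresses the quoted search spaces in code. It is a minimal illustration only: the excerpt does not state which search tooling was used, so a scikit-learn GridSearchCV over GradientBoostingClassifier is assumed for the GB grid, and `X_train`, `y_train`, `random_state=42`, and the DNN grid key names (e.g. `first_layer_neurons`) are hypothetical placeholders rather than the authors' actual setup.

```python
# Sketch of the hyperparameter grids described in the excerpt.
# Assumption: a GridSearchCV-style exhaustive search; the paper does not
# specify the tooling, data split, or scoring metric used.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# GB classifier grid, taken directly from the quoted values.
gb_param_grid = {
    "learning_rate": [0.1, 0.2, 0.3],
    "max_depth": [5],
    "max_features": ["sqrt", "log2"],
    "min_samples_leaf": [10, 20],
    "min_samples_split": [20, 30],
    "n_estimators": [200, 300],
    "subsample": [0.8, 0.9],
}

# DNN grid from the excerpt, written in the same dictionary form; searching it
# would require a Keras/TensorFlow model-builder function, not shown here.
dnn_param_grid = {
    "first_layer_neurons": [16, 32, 64],      # hypothetical key name
    "learning_rate": [0.001, 0.01, 0.05],
    "dropout_rate": [0.1, 0.3, 0.5],
    "batch_size": [8, 16, 32],
    "epochs": [30, 70],
    "activation": ["sigmoid", "tanh", "relu", "swish"],
    "optimizer": ["adam", "sgd", "adamax"],
}

search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),  # random_state is an assumption
    gb_param_grid,
    cv=5,                # cross-validation folds: assumed, not stated in the excerpt
    scoring="accuracy",  # scoring metric: assumed, not stated in the excerpt
    n_jobs=-1,
)
# search.fit(X_train, y_train)   # X_train / y_train are placeholders
# print(search.best_params_)
```

Expressing both grids as dictionaries makes the size of each search space explicit (e.g. the GB grid above enumerates 3 × 1 × 2 × 2 × 2 × 2 × 2 = 96 candidate configurations before cross-validation).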