“…Deep learning models are full of hyperparameters covering both architecture and training (such as the number or type of layers and the learning rate). In most of the reviewed papers, these hyperparameters are optimized through a trial-and-error approach [29], [15], [16], [17], [18], [22], [23], [30], [31], [32], [51], [33]. However, this approach can be time-consuming and error-prone due to a limited understanding of how individual parameters affect model behavior.…”
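One common alternative to manual trial and error is automated search over the hyperparameter space. The sketch below illustrates a minimal random search over two of the hyperparameters mentioned in the excerpt (learning rate and number of layers); the search space, the `toy_validation_loss` objective, and all function names are illustrative assumptions, not taken from any of the reviewed papers. In practice the objective would be a real train-and-validate run.

```python
import random

# Hypothetical search space for two common hyperparameters
# (names and ranges are illustrative, not from the reviewed papers).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2, 1e-1],
    "num_layers": [1, 2, 3, 4],
}

def toy_validation_loss(config):
    """Stand-in for training a model and measuring validation loss.

    A fixed analytic function is used here only so the sketch is
    runnable without a deep learning framework; its minimum (0.0)
    lies at learning_rate=1e-2, num_layers=3.
    """
    lr, layers = config["learning_rate"], config["num_layers"]
    return (lr - 1e-2) ** 2 + (layers - 3) ** 2 * 1e-3

def random_search(n_trials, seed=0):
    """Sample configurations uniformly and keep the best one seen."""
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        loss = toy_validation_loss(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

if __name__ == "__main__":
    config, loss = random_search(n_trials=50)
    print(config, loss)
```

Even this naive strategy removes the per-trial human intervention that makes manual tuning slow; more sample-efficient methods (grid search, Bayesian optimization, Hyperband) follow the same evaluate-and-compare loop with smarter sampling.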