2021
DOI: 10.3390/app112411710

Bituminous Mixtures Experimental Data Modeling Using a Hyperparameters-Optimized Machine Learning Approach

Abstract: This study introduces a machine learning approach based on Artificial Neural Networks (ANNs) for the prediction of Marshall test results, stiffness modulus and air voids data of different bituminous mixtures for road pavements. A novel approach for an objective and semi-automatic identification of the optimal ANN’s structure, defined by the so-called hyperparameters, has been introduced and discussed. Mechanical and volumetric data were obtained by conducting laboratory tests on 320 Marshall specimens, and the…

Cited by 20 publications (6 citation statements) · References 66 publications

“…13. Matteo Miani et al. [76] used a Bayesian optimization algorithm to tune the hyperparameters of an ANN-based machine learning approach for predicting Marshall test results, stiffness modulus, and air voids data of various bituminous mixtures for road pavements. 14.…”
Section: Applications Used Hyperparameters Optimization Algorithms
Citation type: mentioning (confidence: 99%)
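The quoted approach pairs an ANN with a Bayesian search over its hyperparameters. A minimal sketch of that pattern, assuming scikit-learn's MLPRegressor and scikit-optimize's gp_minimize; the search space, synthetic data, and library choice are illustrative assumptions, not the paper's actual setup:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from skopt import gp_minimize
from skopt.space import Integer, Real

# Synthetic stand-in for the 320-specimen laboratory data set.
X, y = make_regression(n_samples=320, n_features=6, noise=0.1, random_state=0)

# Hypothetical search space: hidden-layer width, learning rate, L2 penalty.
space = [
    Integer(4, 64, name="n_hidden"),
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
    Real(1e-6, 1e-2, prior="log-uniform", name="alpha"),
]

def objective(params):
    n_hidden, lr, alpha = params
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                         learning_rate_init=lr, alpha=alpha,
                         max_iter=2000, random_state=0)
    # gp_minimize minimizes, so return the negated mean CV R^2.
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "| best CV R^2:", -result.fun)
```

With around 30 evaluations, such a search typically trains far fewer candidate models than a grid over the same three dimensions would require.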
“…In [37], [72], [76], [77], [79], [82] the overfitting problem is reported; in [37], [82] the dropout rate is the most important hyperparameter for preventing this issue, as dropout and the number of epochs enable the model to reduce overfitting, while in [76], [79] L2 regularization is used to reduce the chance of overfitting, in which case the learning rate hyperparameter is affected. Lastly, [77] and [72] overcome overfitting without relying on a hyperparameter: [77] uses an early stopping technique, and the other fits the number of neurons manually.…”
Citation type: mentioning (confidence: 99%)
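The three countermeasures the quote contrasts (dropout, L2 weight regularization, early stopping) can all be expressed in a few lines. A minimal Keras sketch, assuming TensorFlow; the layer sizes, rates, penalties, and random placeholder data are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X, y = rng.normal(size=(320, 6)), rng.normal(size=(320,))  # placeholder data

model = tf.keras.Sequential([
    # L2 penalty shrinks weights, as in [76], [79].
    tf.keras.layers.Dense(32, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    # Dropout randomly silences 20% of units per training step ([37], [82]).
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),  # single regression output
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Early stopping ([77]): halt when validation loss stalls and restore
# the weights from the best epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)

model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[early_stop], verbose=0)
```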
“…However, there are many optimization algorithms available nowadays that can reduce the time spent searching for the best model hyperparameters. Among them, the Bayesian optimization (BO) algorithm [44] has found considerable success, mainly due to the work of Snoek et al. [45]. The goal of the optimization process is to minimize a given objective function f(z) for z = z_p, p ∈ {0, …”
Section: Bayesian Optimization
Citation type: mentioning (confidence: 99%)
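In the quote's notation, BO treats f(z) as an expensive black box: a Gaussian-process surrogate is fitted to the evaluations seen so far, and an acquisition function picks the next z to try. A toy illustration with scikit-optimize; the quadratic objective is a stand-in for a real model-evaluation loss, not the paper's actual objective:

```python
from skopt import gp_minimize

def f(z):
    # z holds one entry per search dimension; here a single scalar.
    return (z[0] - 2.0) ** 2 + 1.0

# gp_minimize fits a GP surrogate to past evaluations and chooses each
# new z by optimizing an acquisition function over the surrogate.
res = gp_minimize(f, [(-5.0, 5.0)], n_calls=25, random_state=0)
print("argmin ~", res.x[0], "| min ~", res.fun)
```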
“…The standard practice of splitting the available data set into two random subsets of training and testing may result in biased performance evaluations due to the different distribution of data within such splits, along with the risk of missing some relevant trends in training data [40]. These effects are particularly marked when the data set is relatively small.…”
Section: K-fold Cross Validation
Citation type: mentioning (confidence: 99%)
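A minimal sketch of the k-fold procedure the quote recommends over a single random split, assuming scikit-learn; the ridge model and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=320, n_features=6, noise=0.1, random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

# Each sample is held out exactly once, so the averaged score is less
# sensitive to one unlucky split than a single train/test evaluation.
print("mean R^2 over 5 folds:", np.mean(scores))
```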