2019
DOI: 10.1016/j.solener.2019.02.060
Predicting the specific heat capacity of alumina/ethylene glycol nanofluids using support vector regression model optimized with Bayesian algorithm

Cited by 126 publications (37 citation statements). References 47 publications.
“…Most recently, studies have focused on the use of artificial intelligence to predict this fluid property. Alade et al [82] developed a support vector regression (SVR) model optimized with (21)…”
Section: Nanofluid Specific Heat Capacity (Cp)
Confidence: 99%
“…The objective function converged at around 50 iterations with a value of 0.0454 for the BO technique, indicating excellent efficiency in searching for the optimal hyperparameters of the SVR model. This is because the BO technique makes full use of information from previous iterations to find the next possible combination of hyperparameters [41,42]. The results of the objective function and hyperparameters after optimization (training process) are also listed in Table 5.…”
Section: Prediction Results of Mass Change
Confidence: 99%
“…Bayesian optimization makes full use of all the information from previous iterations to find the next data point. Herein, the BO is able to find the global optimal solution with relatively fewer iterations [41,42].…”
Section: Implementation of Optimization in SVR Models
Confidence: 99%
“…Each model in ML techniques includes different hyperparameters that need to be set to obtain an excellent result. For example, parameters such as the box constraint (C), the kernel parameter (γ) and epsilon (ε) play an important role in the success of the SVM algorithm [28]. In another example, the proper selection of the number of terminal nodes in the ensemble trees and the regularization parameter is important for the performance of the stochastic gradient boosting algorithm.…”
Section: Hyperparameter Optimization with Bayesian Optimization
Confidence: 99%
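The quoted statements describe the core idea of Bayesian optimization: fit a surrogate (typically a Gaussian process) to all previously evaluated hyperparameter/loss pairs, and use it to pick the next point to evaluate. The following is a minimal NumPy-only sketch of that loop over a single hyperparameter. The `objective` function here is a hypothetical stand-in for an SVR cross-validation loss (e.g. as a function of log10 of the box constraint C) — it is not the paper's actual model, and the grid bounds, kernel length scale, and acquisition trade-off are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for SVR cross-validation error as a function of one
# hyperparameter (e.g. log10 of the box constraint C). In the paper, this
# would be the model's actual validation loss.
def objective(x):
    return (x - 1.5) ** 2 + 0.1 * np.sin(5 * x)

def rbf(a, b, length=0.5):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and std at query points Xs,
    # conditioned on observations (X, y), via a Cholesky factorization.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Bayesian optimization loop: each new evaluation is chosen using the GP
# fitted to *all* previous (x, loss) pairs -- the "information from previous
# iterations" that the quoted text refers to.
grid = np.linspace(-1.0, 4.0, 400)   # candidate hyperparameter values
X = np.array([-0.5, 3.5])            # two initial evaluations
y = objective(X)
for _ in range(15):
    mu, sigma = gp_posterior(X, y, grid)
    # Lower-confidence-bound acquisition: trade off exploitation (low mean
    # predicted loss) against exploration (high predictive uncertainty).
    lcb = mu - 2.0 * sigma
    x_next = grid[np.argmin(lcb)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]
print(f"best hyperparameter: {best:.3f}, loss: {y.min():.4f}")
```

With only ~17 objective evaluations the surrogate-guided search homes in on the low-loss region, which is why the quoted text notes that BO reaches a good optimum in relatively few iterations compared with grid or random search.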