2022
DOI: 10.3390/ma15134721

Research on Hyperparameter Optimization of Concrete Slump Prediction Model Based on Response Surface Method

Abstract: In this paper, eight variables (cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, fine aggregate, and flow) are used as network inputs and slump is used as the network output to construct a back-propagation (BP) neural network. On this basis, the learning rate, momentum factor, number of hidden nodes, and number of iterations are used as hyperparameters to construct 2-layer and 3-layer neural networks, respectively. Finally, the response surface method (RSM) is used to optimize the param…
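The abstract describes a BP network whose four training hyperparameters (learning rate, momentum factor, number of hidden nodes, number of iterations) are subsequently tuned with the response surface method. Below is a minimal sketch of such a network, not the authors' code: the CSV file name, column names, and hyperparameter values are assumptions for illustration, and scikit-learn's MLPRegressor stands in for a hand-built back-propagation network.

```python
# Sketch of a single-hidden-layer BP network for slump prediction.
# Assumptions: a hypothetical "concrete_slump.csv" with the eight input
# columns named in the abstract and a "slump" target column.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

FEATURES = ["cement", "slag", "fly_ash", "water",
            "superplasticizer", "coarse_agg", "fine_agg", "flow"]

df = pd.read_csv("concrete_slump.csv")              # hypothetical data file
X, y = df[FEATURES], df["slump"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(
        hidden_layer_sizes=(8,),   # number of hidden nodes (one hidden layer)
        solver="sgd",              # plain gradient-descent back-propagation
        learning_rate_init=0.01,   # learning rate
        momentum=0.9,              # momentum factor
        max_iter=2000,             # number of training iterations
        random_state=0,
    ),
)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```

In the paper's setup these four values would not be fixed by hand as above; RSM fits a quadratic response surface over a designed set of hyperparameter combinations and picks the combination that optimizes the predicted performance.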

Cited by 7 publications (3 citation statements)
References 21 publications
“…Configuration of algorithms and baselines. We compare PGS against baselines including COMs (Trabucco et al 2021), NEMO (Fu and Levine 2021), ROMA (Yu et al 2021a), BDI (Chen et al 2022), BONET (Krishnamoorthy, Mashkaria, and Grover 2022b) and other baselines from design-bench (Trabucco et al 2022). We took the results for all baselines from their respective papers.…”
Section: Experiments and Results (mentioning, confidence: 99%)
“…We took the results for all baselines from their respective papers. Since NEMO and ROMA don't report normalized scores, and their original papers lack a few tasks, we take their results from the state-of-the-art BDI paper (Chen et al 2022). This is reasonable because all baselines use the same design-bench benchmark and evaluation methodology.…”
Section: Experiments and Results (mentioning, confidence: 99%)
“…Finally, the predictive slump model, trained with 19,536 data instances, achieved R² = 0.84 and RMSE = 11.3 mm during 10-fold cross-validation. In the study outlined in [35], modeling with a neural network on 103 data instances using a 70-30% train-test split yielded an R² of 0.95 and an RMSE of 2.781 cm (equivalent to 27.81 mm). We consider that the results are not directly comparable due to significant differences in the quantity of data.…”
Section: Discussion (mentioning, confidence: 99%)
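The comparison quoted above rests on R² and RMSE obtained under 10-fold cross-validation, with a unit conversion (2.781 cm = 27.81 mm) needed to put the two RMSE values on the same scale. The sketch below shows how those two metrics are typically computed; it uses synthetic data and a generic scikit-learn regressor as assumptions, not either paper's dataset or model.

```python
# R^2 and RMSE from 10-fold cross-validation on synthetic data (illustrative only).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in: 103 samples, 8 features (mirroring the smaller study's size).
X, y = make_regression(n_samples=103, n_features=8, noise=10.0, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)   # out-of-fold predictions

r2 = r2_score(y, y_pred)
rmse = np.sqrt(mean_squared_error(y, y_pred))    # RMSE carries the target's unit (mm vs cm matters)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f}")
```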