2022
DOI: 10.1016/j.nimb.2022.06.001
Model calibration of the liquid mercury spallation target using evolutionary neural networks and sparse polynomial expansions

Cited by 4 publications (2 citation statements). References 33 publications.
“…Without helium gas injection, the equation of state (EOS) material model was used to simulate the mercury in the target vessel and had good predictions of the vessel strain response [4]. A further machine learning study of these model parameters [5] illustrated how to utilize the modern computational method to finely tune the EOS model parameters for a better strain response prediction. Work presented in this paper focused on identifying the major parameters in the two-phase mercury model and their reasonable ranges for machine learning application.…”
Section: Introduction
confidence: 99%
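The calibration-as-optimization idea in the statement above (tuning EOS model parameters so the simulated strain response better matches measurement) can be sketched as a least-squares fit. The surrogate model, the parameter names, and the data below are placeholders for illustration, not the actual EOS model or data of the cited work:

```python
# Hypothetical sketch: calibrate two placeholder EOS-style parameters
# by minimizing the squared mismatch between a cheap surrogate of the
# strain response and (stand-in) measured strain samples.
import random

measured = [0.0, 0.8, 1.1, 0.9, 0.5]  # stand-in strain measurements

def simulate_strain(sound_speed_scale, damping):
    # Placeholder surrogate for an expensive hydrodynamics simulation.
    return [sound_speed_scale * t * (1 - damping * t) for t in (0, 1, 2, 3, 4)]

def misfit(params):
    # Sum of squared residuals between simulation and measurement.
    sim = simulate_strain(*params)
    return sum((s - m) ** 2 for s, m in zip(sim, measured))

def random_search(n_samples=2000, seed=0):
    # Simple derivative-free search over a bounded parameter box;
    # the cited work uses far more sophisticated machinery.
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        params = (rng.uniform(0.0, 2.0), rng.uniform(0.0, 0.5))
        err = misfit(params)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

best_params, best_err = random_search()
```

A plain random search stands in here for the evolutionary / surrogate-based optimizers the paper title refers to; the point is only the structure of the problem, i.e. parameters in, misfit out.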
“…However, we assume that neither φ(x) nor the gradient ∇φ(x) is readily available, and that the function φ(x) is only accessible via a noisy approximation F(x) = φ(x) + ε(x), where ε represents a highly oscillating noise that perturbs the true objective function. This scenario arises in various applications, for instance machine learning [19,32,40], model calibration [31] and experimental design [11,26], where the loss landscape of the objective function is non-convex, highly rugged and complex, with its main geometric features concealed under small-scale, deceptive fluctuations. In such cases, conventional gradient-based algorithms are not effective because F has many local minima that would trap the optimizers.…”
confidence: 99%
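The setting described in the statement above, a smooth objective hidden under high-frequency noise, can be illustrated with a toy example. The specific φ and ε below are illustrative choices, not taken from the cited work:

```python
# Toy illustration: phi(x) = x**2 is smooth with minimizer x = 0, but
# the accessible objective F(x) = phi(x) + eps(x), with the oscillating
# perturbation eps(x) = 0.1*sin(100*x), has many spurious local minima.
# Plain gradient descent on F stalls in one of them, far from x = 0.
import math

def F(x):
    return x**2 + 0.1 * math.sin(100 * x)

def grad_F(x, h=1e-6):
    # Central finite difference on the noisy objective.
    return (F(x + h) - F(x - h)) / (2 * h)

def gradient_descent(x0, lr=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_F(x)
    return x

x_star = gradient_descent(x0=2.0)
# x_star settles at a noise-induced local minimum near the start,
# not at the true minimizer x = 0 of phi.
```

This is exactly the failure mode the quoted passage describes: the small steps a stable learning rate requires cannot carry the iterate over the noise-induced barriers between basins.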