2021
DOI: 10.1002/sam.11507
A comparison of Gaussian processes and neural networks for computer model emulation and calibration

Abstract: The Department of Energy relies on complex physics simulations for prediction in domains like cosmology, nuclear theory, and materials science. These simulations are often extremely computationally intensive, with some requiring days or weeks for a single run. To ensure their accuracy, these models are calibrated against observational data to estimate inputs and systematic biases. Because of their great computational complexity, this process typically requires the construction of an em…
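As a rough illustration of the emulation-and-calibration workflow the abstract sketches, the snippet below fits a Gaussian process emulator to a handful of precomputed simulator runs and then estimates the input that best matches an observation. This is a minimal sketch under assumed conditions; the toy simulator, the 1-D input, and the observed value are all illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A few precomputed runs of an "expensive" simulator (toy stand-in; each
# real run might take days or weeks).
thetas = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
runs = np.cos(3.0 * thetas).ravel()

# GP emulator: a cheap statistical surrogate for the simulator.
emulator = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
emulator.fit(thetas, runs)

# Calibration: find the input whose emulated output matches the observation.
y_obs = 0.2  # hypothetical observational datum
def loss(t):
    return float((emulator.predict(np.array([[t]]))[0] - y_obs) ** 2)

theta_hat = minimize_scalar(loss, bounds=(0.0, 1.0), method="bounded").x
print("calibrated input estimate:", theta_hat)
```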

Cited by 9 publications (5 citation statements)
References 24 publications
“…Such a model works with minimal data and can be improved iteratively by adding new data points. [59] The resulting model can predict an expected outcome at every point in the parameter space, usually as a smooth hypersurface. Furthermore, the model comes with a standard deviation that can be evaluated at every point in the input parameter space, giving a metric for the model's uncertainty, which can be used to explore formulations where the model has very little knowledge.…”
Section: Results (mentioning, confidence: 99%)
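This use of the GP's pointwise standard deviation is easy to demonstrate. The sketch below assumes scikit-learn's GaussianProcessRegressor on toy 1-D data; the kernel choice and the evaluation grid are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# A handful of measured formulations (toy data for illustration).
X_train = np.array([[0.1], [0.4], [0.6], [0.9]])
y_train = np.sin(2.0 * np.pi * X_train).ravel()

# GP with an RBF kernel; kernel hyperparameters are fit by maximum likelihood.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean plus a standard deviation at every point in the space.
X_grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)

# The largest std marks where the model knows least, i.e. where to explore next.
print("most uncertain point:", X_grid[np.argmax(std)].item())
```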
“…[48-50] Furthermore, GPs have shown better performance on small datasets than models like Bayesian neural networks (BNNs) or neural-network ensembles, which can also provide uncertainty estimates. [51,52] This makes GPs well suited for predictions in adsorption-like problems, where generating large datasets is expensive. These approaches are already gaining popularity in the molecular simulation space.…”
Section: Active Learning as an Alternate Strategy for Surrogate Models (mentioning, confidence: 99%)
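The active-learning strategy this statement refers to can be sketched as a loop that repeatedly queries the expensive simulation where the GP is least certain. This is a minimal maximum-variance sketch; the stand-in simulator, the Matérn kernel, and the candidate grid are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulation(x):
    # Stand-in for a costly adsorption calculation (illustrative only).
    return float(np.sin(3.0 * x) + 0.5 * x)

# GPs tolerate small datasets, so start from just a few points.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([expensive_simulation(x[0]) for x in X])
candidates = np.linspace(0.0, 2.0, 401).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]      # query where the GP is least sure
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_simulation(x_next[0]))
```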
“…In hyperparameter optimization utilizing the BO algorithm, it is first necessary to define an initial set of configurations X₀ (input values determined using the gray-box principle described in the next section) and their associated function values (in our case, the machine flux and speed): D₀ = {X₀, y₀}. Then, in a loop, we updated the GP model using the Bayes rule; see [53,54]. Subsequently, a new hyperparameter configuration was chosen through the optimization of an auxiliary function known as the acquisition function.…”
Section: Hyperparameter Bayesian Optimization (mentioning, confidence: 99%)
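The loop described there maps onto a few lines of code. Below is a minimal Bayesian-optimization sketch with a GP surrogate and an expected-improvement acquisition; the toy objective, kernel, and grid search over the acquisition are assumptions, not the cited authors' setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for the black-box metric (e.g., flux/speed); illustrative only.
    return float(-(x - 0.7) ** 2)

def expected_improvement(mu, std, y_best, xi=0.01):
    # Standard EI for maximization; xi balances exploration vs. exploitation.
    std = np.maximum(std, 1e-12)
    z = (mu - y_best - xi) / std
    return (mu - y_best - xi) * norm.cdf(z) + std * norm.pdf(z)

# Initial design D0 = {X0, y0}.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([objective(x[0]) for x in X])
grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)                                 # posterior update (Bayes rule)
    mu, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, std, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best configuration:", X[np.argmax(y)].item(), "value:", y.max())
```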