2018
DOI: 10.48550/arxiv.1807.00653
Preprint

Nesterov-aided Stochastic Gradient Methods using Laplace Approximation for Bayesian Design Optimization

Cited by 2 publications (5 citation statements)
References 0 publications
“…Moreover, surrogate models and approximations of the marginal likelihood could be combined with the full-model LMIS estimator using a control variate or multilevel Monte Carlo approach, to further reduce the number of full model evaluations while preserving consistency. Another clear extension is to pair LMIS estimators with optimization methods designed for stochastic objectives (e.g., [20,40,32,8]) to facilitate search over the design space.…”
Section: Discussion
confidence: 99%
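The control-variate idea mentioned in this excerpt can be sketched generically: pair each expensive full-model sample with a cheap surrogate sample whose mean is known, and subtract the correlated surrogate fluctuation. The following is an illustrative sketch only, not the cited paper's LMIS estimator; `full_model` and `surrogate` are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def full_model(x):
    # hypothetical stand-in for an expensive full-model evaluation
    return np.exp(x)

def surrogate(x):
    # cheap surrogate: second-order Taylor expansion of exp around 0,
    # chosen so its mean under X ~ N(0, 1) is known exactly
    return 1.0 + x + 0.5 * x**2

x = rng.normal(size=100_000)
f = full_model(x)
g = surrogate(x)
mean_g = 1.5  # E[1 + X + X^2/2] = 1 + 0 + 0.5 for X ~ N(0, 1)

# optimal control-variate coefficient beta = Cov(f, g) / Var(g)
beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)

plain_est = f.mean()                       # plain Monte Carlo estimate
cv_est = (f - beta * (g - mean_g)).mean()  # control-variate estimate
```

Because the surrogate correlates strongly with the full model, the control-variate samples have much lower variance than the raw samples, which is exactly the mechanism the excerpt proposes for reducing full-model evaluations while preserving consistency.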
“…The model for the average work per sample of g, denoted by W(g), is assumed to follow (11). Then, for a fixed κ ∈ (0, 1), the level L and M can be estimated from the bias constraint (36), given Assumption 1 and work model (11). Given L and κ, minimizing the work (44), subject to the bias constraint (36) and statistical constraint (39) with probability 1 − α, gives us the number of samples on level ℓ for the MLDLMC estimator (cf.…”
Section: Choice of MLDLMC Parameters
confidence: 99%
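The parameter choice this excerpt describes follows the usual multilevel Monte Carlo pattern: fix the finest level L from a bias constraint, then allocate per-level sample counts by minimizing total work subject to a statistical-error constraint. A generic sketch of that allocation step under standard assumed geometric variance/cost rates (illustrative only, not the paper's MLDLMC estimator):

```python
import math

def mlmc_sample_sizes(V, C, eps):
    """Per-level sample counts M_l minimizing total work sum(M_l * C_l)
    subject to the statistical constraint sum(V_l / M_l) <= eps**2.
    Standard Lagrange-multiplier solution: M_l ~ sqrt(V_l / C_l)."""
    S = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [max(1, math.ceil(math.sqrt(v / c) * S / eps**2))
            for v, c in zip(V, C)]

# illustrative rates: level variance decays, cost grows geometrically
V = [2.0 ** (-2 * l) for l in range(4)]  # V_l ~ 4^{-l}
C = [2.0 ** l for l in range(4)]         # C_l ~ 2^l
M = mlmc_sample_sizes(V, C, eps=0.01)    # most samples on the cheap levels
```

The resulting counts decrease with level: cheap, high-variance coarse levels get many samples, while expensive fine levels get few, which is what makes the multilevel estimator cheaper than single-level Monte Carlo at the same accuracy.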