Comment on “Novel Approach for Selecting Low-Fidelity Scale Factor in Multifidelity Metamodeling”

2022 | DOI: 10.2514/1.j061643

Cited by 3 publications (2 citation statements)
References 3 publications
“…The initial learning rate is set to 10⁻³, and it decays at a rate of 0.1 every 1000 epochs. The maximum number of epochs should be selected carefully because a model that does not fully converge can make significantly different predictions than a converged model [40]. For all models (AE, VAE, and β-VAE), the maximum number of epochs is set to 3000 because additional epochs did not result in a significant change, and this value guaranteed sufficient convergence in terms of the loss function.…”
Section: Training Details
Citation type: mentioning
Confidence: 99%
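
A minimal sketch of the training schedule quoted above, assuming a PyTorch setup: the initial learning rate, the step decay of 0.1 every 1000 epochs, and the 3000-epoch limit come from the quoted passage, while the Adam optimizer, the stand-in autoencoder, and the synthetic data are illustrative assumptions not taken from the cited paper.

# Hedged sketch of the quoted training schedule (PyTorch); model, data, and
# optimizer choice are placeholders, only the schedule values follow the quote.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Sequential(torch.nn.Linear(128, 16), torch.nn.Linear(16, 128))   # stand-in autoencoder
data = TensorDataset(torch.randn(256, 128))                                        # placeholder training data
loader = DataLoader(data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)                          # initial learning rate 1e-3
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.1)  # decay by 0.1 every 1000 epochs

for epoch in range(3000):                                                          # fixed maximum of 3000 epochs
    for (x,) in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), x)                           # reconstruction loss (AE case)
        loss.backward()
        optimizer.step()
    scheduler.step()                                                               # epoch-level learning-rate decay

The epoch-level scheduler.step() call is what produces the quoted decay of 0.1 every 1000 epochs.
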
“…2, AE, VAE, and β-VAE, are trained to investigate their differences in terms of DR for transonic flow. In particular, several β-VAE models are trained (β ∈ [10, 20, 30, 40, 50, 100, 150, 200, 500, 750, 1000, 2000, 3000, 4000]) to investigate the effects of the β value (technically speaking, the VAE can be regarded as the β-VAE when β has a value of 1). The dimension of the latent space is set to 16, which is considered sufficient for encoding the training data generated from the 2D parameter domain (Ma and AoA).…”
Citation type: mentioning
Confidence: 99%
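
The β-VAE objective this passage refers to can be sketched as follows; the 16-dimensional latent space and the β sweep are taken from the quote, while the summed mean-squared-error reconstruction term and the tensor shapes in the usage example are assumptions.

# Hedged sketch of the beta-VAE objective implied by the quoted passage:
# reconstruction term plus a beta-weighted KL divergence toward N(0, I).
import torch

LATENT_DIM = 16  # latent-space dimension used in the quoted study

def beta_vae_loss(x, x_hat, mu, log_var, beta):
    """Reconstruction + beta * KL(q(z|x) || N(0, I)); beta = 1 recovers the plain VAE."""
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="sum")        # assumed summed-MSE reconstruction
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())       # closed-form Gaussian KL term
    return recon + beta * kl

# Beta values swept in the quoted passage (the plain VAE corresponds to beta = 1):
betas = [10, 20, 30, 40, 50, 100, 150, 200, 500, 750, 1000, 2000, 3000, 4000]

# Example evaluation with random tensors of assumed shapes:
x, x_hat = torch.randn(8, 128), torch.randn(8, 128)
mu, log_var = torch.randn(8, LATENT_DIM), torch.randn(8, LATENT_DIM)
print(beta_vae_loss(x, x_hat, mu, log_var, beta=100))

Larger β values weight the KL term more heavily, which is what the quoted sweep is designed to probe.
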