Finite-Element-Model Updating Using Computational Intelligence Techniques (2010)
DOI: 10.1007/978-1-84996-323-7_1
Introduction to Finite-element-model Updating

Cited by 2 publications (1 citation statement)
References 88 publications
“…Among several training algorithms, the scaled conjugate gradient was selected due to its computational efficiency. [64] The sigmoid activation function was employed for the hidden layer, while the softmax activation function was utilized for the output layer. Bayesian hyperparameter optimization was adopted to find the best possible number of neurons in the hidden layer and the network's learning rate.…”
Section: Development of ANN Model
Confidence: 99%
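The network described in the citation statement can be sketched in a few lines. This is a minimal illustration, not the cited implementation: the paper trains with scaled conjugate gradient and tunes the hidden-layer size and learning rate via Bayesian optimization, whereas this sketch substitutes plain gradient descent and fixed, assumed hyperparameters. All data, sizes, and the learning rate are illustrative assumptions.

```python
import numpy as np

# Sketch of the described architecture: one sigmoid hidden layer feeding a
# softmax output layer, trained on cross-entropy loss. Plain gradient
# descent stands in for the paper's scaled conjugate gradient; the hidden
# size and learning rate (tuned by Bayesian optimization in the paper) are
# simply fixed here.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

def forward(params, X):
    W1, b1, W2, b2 = params
    h = sigmoid(X @ W1 + b1)            # sigmoid hidden layer
    return h, softmax(h @ W2 + b2)      # softmax output layer

def train(X, Y, hidden=8, lr=0.5, epochs=500):
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        h, p = forward((W1, b1, W2, b2), X)
        d2 = (p - Y) / len(X)           # grad of cross-entropy w.r.t. logits
        dW2, db2 = h.T @ d2, d2.sum(0)
        dh = (d2 @ W2.T) * h * (1 - h)  # sigmoid derivative
        dW1, db1 = X.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Toy two-class problem: label points by the sign of their first coordinate.
X = rng.normal(size=(200, 2))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]
params = train(X, Y)
_, probs = forward(params, X)
accuracy = (probs.argmax(axis=1) == Y.argmax(axis=1)).mean()
```

On this separable toy problem the sketch reaches high training accuracy; the substantive choices the statement highlights (sigmoid hidden units, softmax outputs, and which hyperparameters are tuned) are what carry over to a real model-updating application.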