2022
DOI: 10.1016/j.ress.2022.108635

Adaptive learning for reliability analysis using Support Vector Machines

Cited by 28 publications (5 citation statements)
References 47 publications
“…The SVM belongs to a family of generalised linear classifiers, such as backpropagation neural network, RBF, learning vector quantisation and Kohonen. This family of classifiers can both minimise the empirical classification error and maximise the geometric margin.34 The SVM employs structural risk minimisation to find the best hyperplane that separates two different classes in the input space of the SVM.…”
Section: Results
confidence: 99%
“…This family of classifiers can both minimise the empirical classification error and maximise the geometric margin.34 The SVM employs structural risk minimisation to find the best hyperplane that separates two different classes in the input space of the SVM. This algorithm obtains high accuracy in robust classification.…”
Section: SVM Analysis
confidence: 99%
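The quoted passages describe margin maximisation and structural risk minimisation. A minimal sketch of a linear soft-margin SVM trained by subgradient descent on the regularised hinge loss (a Pegasos-style illustration under assumed data and hyperparameters, not the adaptive-learning scheme of the cited paper):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Subgradient descent on the regularised hinge loss:
    min_w  lam/2 * ||w||^2 + mean(max(0, 1 - y_i * (w.x_i + b))).
    Minimising ||w|| subject to the margin constraints is what
    'maximising the geometric margin' means for a linear SVM."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                    # inside margin: hinge subgradient
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                             # safely classified: only shrink w
                w = (1 - lr * lam) * w
    return w, b

# Two linearly separable clusters, labels +1 / -1 (toy data)
X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 2.5],
              [-2.0, -2.0], [-2.5, -1.5], [-3.0, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)                     # the separating hyperplane's side
```

On separable data like this, the learned hyperplane classifies all training points correctly; the regularisation term `lam` is what trades margin width against hinge violations on noisier data.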
“…The learning rate is maintained for each network weight (parameter) and is adapted separately as learning progresses. The method calculates the individual adaptive learning rates (Pepper et al, 2022), (Kotsyuba et al, 2022) for different parameters of the first and second moment estimates of the gradient.…”
Section: Literature Review
confidence: 99%
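The update rule described above — a per-parameter learning rate maintained from first and second moment estimates of the gradient — matches the Adam optimiser. A minimal sketch on a toy quadratic (the defaults are Adam's usual constants; the learning rate and objective are chosen here only for illustration):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: each parameter gets its own effective step size,
    derived from bias-corrected first (m) and second (v) moment estimates
    of its gradient."""
    m = b1 * m + (1 - b1) * grad            # first moment: running mean of grads
    v = b2 * v + (1 - b2) * grad ** 2       # second moment: running mean of grad^2
    m_hat = m / (1 - b1 ** t)               # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = ||theta||^2, whose gradient is 2 * theta
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because the step is `m_hat / sqrt(v_hat)`, parameters with consistently large gradients take proportionally smaller steps, which is the "individual adaptive learning rates" behaviour the passage refers to.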
“…Surrogate model26 is an effective method to improve the computational efficiency of MCS, and its accuracy and efficiency depend on the model type and sampling strategy. Commonly used methods include the Kriging function,6-8 support vector machine,27,28 and neural network.29,30 There is no clear conclusion on which method is better than others for all problems.…”
Section: Introduction
confidence: 99%
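The passage above summarises the surrogate idea: fit a cheap approximation of the limit-state function from a small design of experiments, then run Monte Carlo simulation (MCS) on the surrogate. A toy sketch, where a quadratic least-squares fit stands in for the Kriging/SVM/neural-network surrogates and `g` is a hypothetical analytic limit state (failure when g < 0), not one from the cited papers:

```python
import numpy as np

def g(x):
    """Hypothetical limit-state function; in practice this would be an
    expensive simulation (e.g. a finite-element run)."""
    return 5.0 - x[:, 0] ** 2 - x[:, 1]

rng = np.random.default_rng(1)

# 1) Fit a surrogate g_hat from a small design of experiments (50 runs).
X_doe = rng.normal(size=(50, 2)) * 2
feats = lambda X: np.column_stack(
    [np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])  # quadratic basis
coef, *_ = np.linalg.lstsq(feats(X_doe), g(X_doe), rcond=None)
g_hat = lambda X: feats(X) @ coef

# 2) MCS on the cheap surrogate: 100k samples cost almost nothing.
X_mc = rng.normal(size=(100_000, 2))
pf_surrogate = np.mean(g_hat(X_mc) < 0)
pf_direct = np.mean(g(X_mc) < 0)   # only feasible here because g is cheap
```

As the passage notes, accuracy depends on the model type and the sampling strategy: here the quadratic basis happens to contain the true limit state, so the two failure-probability estimates agree; with a mismatched surrogate or a poor design of experiments, the estimate would be biased.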