2022
DOI: 10.1061/(ASCE)ST.1943-541X.0003332
High-Dimensional Reliability Analysis with Error-Guided Active-Learning Probabilistic Support Vector Machine: Application to Wind-Reliability Analysis of Transmission Towers

Cited by 16 publications (6 citation statements)
References 45 publications
“…In this study, the extreme gradient boosting (XGBoost) model is used for the classification task. Compared to the other machine learning models commonly applied in the field of reliability analysis, such as logistic regression [41,42] and support vector machines [43], XGBoost provides many benefits. These advantages include the handling of high-dimensional data, support for regularization to avoid overfitting, and the ability to perform cross-validation at each iteration to constantly monitor the performance of the model [44].…”
Section: Methodsmentioning
confidence: 99%
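The classifier-based reliability workflow described in the statement above can be sketched independently of the specific model: a trained surrogate classifier labels Monte Carlo samples as safe or failed, and the failure probability is estimated as the failed fraction. A minimal sketch, with a hand-written threshold rule standing in for the trained XGBoost classifier (the threshold rule, function names, and sample distribution are assumptions for illustration):

```python
import random

def surrogate_classifier(x):
    """Hypothetical stand-in for a trained classifier (e.g. XGBoost):
    labels a sample as failed (1) when the sum of its features exceeds
    a fixed threshold. Purely illustrative, not the paper's model."""
    return 1 if sum(x) > 2.5 else 0

def failure_probability(n_samples=10_000, dim=3, seed=0):
    """Monte Carlo estimate of P(failure): draw random inputs, label
    each with the surrogate, and return the failed fraction."""
    rng = random.Random(seed)
    failed = 0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        failed += surrogate_classifier(x)
    return failed / n_samples

pf = failure_probability()
print(pf)
```

Once the surrogate is cheap to evaluate, the Monte Carlo sample size can be made large without further calls to the expensive structural model, which is the practical payoff of surrogate-based reliability analysis.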
“…In the feasibility stage, new sample points can be selected by maximizing the feasibility learning function as shown in Equation (16) [32]:…”
Section: Feasibility Learning Function Maximizationmentioning
confidence: 99%
“…In the feasibility stage, new sample points can be selected by maximizing the feasibility learning function as shown in Equation (16) [32]:

$$\max L(\mathbf{x}) = P_{\text{wrong class}}(\mathbf{x}) \cdot \bar{d}(\mathbf{x})$$

where \(P_{\text{wrong class}}(\mathbf{x})\) is the probability of wrong classification and \(\bar{d}(\mathbf{x})\) is the normalized distance to the nearest neighbor. \(P_{\text{wrong class}}(\mathbf{x})\) is defined as:

$$P_{\text{wrong class}}(\mathbf{x}) = \min\left(P_{-1}(\mathbf{x}),\, P_{+1}(\mathbf{x})\right)$$

where \(P_{-1}(\mathbf{x})\) and \(P_{+1}(\mathbf{x})\) are calculated based on the SVM prediction \(\hat{\psi}(\mathbf{x})\), as shown below:

$$P_{-1}(\mathbf{x}) = \frac{1}{2 + \hat{f}(\mathbf{x})}, \quad \text{if } \hat{\psi}(\mathbf{x}) > 0$$

$$P_{-1}(\mathbf{x}) = \frac{1}{1 + \exp\!\big(\hat{f}\,\ldots$$…”
Section: Proposed Dynamic Switching Sbo Frameworkmentioning
confidence: 99%
“…The instance that is closest to the hyperplane is called the support vector. According to Song, the advantages of the SVM algorithm are that it provides a smaller generalization error than other methods and can solve problems with high dimensions and limited samples [24]. The weakness of SVM is its complicated computation for high-dimensional data [25].…”
Section: Support Vector Machine (Svm)mentioning
confidence: 99%
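The notion of a support vector in the statement above can be illustrated for a linear SVM: with decision function \(f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x} + b\), the points lying on the margin satisfy \(|f(\mathbf{x})| = 1\), and the margin width is \(2/\lVert\mathbf{w}\rVert\). A small sketch with hand-picked \(\mathbf{w}, b\) and points (all values assumed for illustration, not a trained model):

```python
import math

def decision_value(w, b, x):
    """Linear SVM decision function f(x) = w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def find_support_vectors(w, b, points, tol=1e-9):
    """For a hard-margin linear SVM, points lying exactly on the
    margin (|f(x)| == 1) are the support vectors."""
    return [x for x in points
            if abs(abs(decision_value(w, b, x)) - 1.0) < tol]

# Assumed separating hyperplane x1 + x2 = 0, scaled so the margin
# points give |f| = 1, hence w = (1, 1), b = 0.
w, b = (1.0, 1.0), 0.0
pts = [(0.5, 0.5), (-0.5, -0.5), (2.0, 2.0), (-3.0, 0.5)]
svs = find_support_vectors(w, b, pts)
margin_width = 2.0 / math.hypot(*w)
print(svs)           # → [(0.5, 0.5), (-0.5, -0.5)]
print(margin_width)
```

Only the two margin points determine the hyperplane; the remaining points could move (without crossing the margin) and leave the solution unchanged, which is why SVMs cope well with limited samples.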