2011
DOI: 10.1117/12.882016
Damage classification using Adaboost machine learning for structural health monitoring

Cited by 10 publications (3 citation statements)
References 27 publications
“…AdaBoost builds the model sequence using a different method than XGBoost, an improved version of Gradient Boosting with various enhancements and improvements. The particular challenge and the application's needs will determine which solution is best [23].…”
Section: ) Support Vector Machine (Svm)mentioning
confidence: 99%
“…These characteristics are then used in AdaBoost in order to classify the damage level. AdaBoost is used in Kim and Philen [14] to distinguish between the two most common types of damage of metallic structures, cracks and corrosion, using four different signal processing methods, in time and frequency domains.…”
Section: Introductionmentioning
confidence: 99%
“…Nonetheless, SVM also has several drawbacks in applications; for example, its classification performance is sometimes very sensitive to the kernel selection, so choosing an appropriate kernel for SVM becomes a problem [17]. Additionally, AdaBoost is a well-known ML method for classification problems, which has been successfully applied in many application fields [18][19][20]. Nonetheless, it has the central disadvantage of being very sensitive to noise, so it tends to produce over-fitted results during model training if the training samples contain much noise [21].…”
Section: Introductionmentioning
confidence: 99%
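The excerpts above describe AdaBoost's reweighting behavior, including its sensitivity to noisy samples. As a generic illustration (not the cited paper's implementation), the following is a minimal textbook AdaBoost sketch using 1-D threshold decision stumps as weak learners; the data, threshold search, and round count are all hypothetical choices for demonstration:

```python
import math

def train_adaboost(X, y, n_rounds=10):
    """Train AdaBoost with 1-D decision stumps.
    X: list of floats; y: list of labels in {+1, -1}."""
    n = len(X)
    w = [1.0 / n] * n                 # uniform initial sample weights
    ensemble = []                      # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # Exhaustively search stump thresholds and polarities
        # for the lowest weighted error on the current weights.
        for thr in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)          # avoid log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Reweight: misclassified samples gain weight, correct ones lose it.
        # This is the step that makes AdaBoost chase noisy/mislabeled points.
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps; sign of the score is the class."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

Because the exponential reweighting keeps inflating the weight of points the ensemble gets wrong, a mislabeled (noisy) sample can come to dominate later rounds, which is the over-fitting behavior the last excerpt refers to.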