2023
DOI: 10.1038/s41598-023-30037-9

Machine learning intelligence to assess the shear capacity of corroded reinforced concrete beams

Abstract: The ability of machine learning (ML) techniques to forecast the shear strength of corroded reinforced concrete beams (CRCBs) is examined in the present study. These ML techniques include artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), decision tree (DT) and extreme gradient boosting (XGBoost). A thorough databank with 140 data points on the shear capacity of CRCBs with various degrees of corrosion was compiled after a review of the literature. The input parameters of the i…

Cited by 14 publications (4 citation statements)
References 47 publications
“…SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any ML model. It uses Shapley values, a well-established mathematical concept from cooperative game theory, to explain the output of a model by assigning a contribution to each feature [64].…”
Section: Results (mentioning)
confidence: 99%
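The Shapley-value idea behind SHAP can be illustrated without any library: each feature's contribution is its average marginal effect over all coalitions of the other features. The sketch below computes exact Shapley values for a hypothetical two-feature predictor (the model, inputs, and baseline are illustrative only; real SHAP implementations average over a background dataset rather than a single baseline).

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical predictor with an interaction term (illustrative only).
    return 3.0 * x[0] + 2.0 * x[1] + 0.5 * x[0] * x[1]

def shapley_values(f, x, baseline):
    # Exact Shapley values: for each feature i, average its marginal
    # contribution f(S + {i}) - f(S) over all subsets S of the others,
    # with the standard combinatorial weights |S|!(n-|S|-1)!/n!.
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [2.0, 1.0]
baseline = [0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

The efficiency check at the end is what makes SHAP attributions "additive": the per-feature contributions always reconstruct the gap between the prediction and the baseline output.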
“…SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any ML model. It uses Shapley values, a well-established mathematical concept from cooperative game theory, to explain the output of a model by assigning a contribution to each feature 64 .…”
Section: Resultsmentioning
confidence: 99%
“…The most common data normalization ranges used in prediction models are −1 to 1, 0 to 1, or 0.1 to 0.9 [42]. In this article, the data values were normalized in the range of −1 to 1 using Eq.…”
Section: Methodology of Study (mentioning)
confidence: 99%
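The min-max scaling described above can be sketched in a few lines (the equation number referenced in the statement is truncated in this excerpt, so the formula below is the standard min-max mapping, not necessarily the paper's exact equation):

```python
def normalize(values, lo=-1.0, hi=1.0):
    # Min-max scaling: map the observed [min, max] linearly onto [lo, hi].
    x_min, x_max = min(values), max(values)
    return [lo + (hi - lo) * (v - x_min) / (x_max - x_min) for v in values]

print(normalize([10.0, 20.0, 30.0]))  # [-1.0, 0.0, 1.0]
```

Choosing −1 to 1 (rather than 0 to 1) centers the inputs around zero, which often speeds up training of networks with symmetric activation functions such as tanh.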
“…The algorithm works by recursively partitioning the dataset into increasingly homogeneous subsets based on the values of input features [35]. Starting at the root node, which represents the entire dataset, the algorithm searches for the best feature to split the data, aiming to create subsets that are as pure as possible in terms of class distribution [36]. This process continues down the tree, with each internal node representing a decision point based on a specific feature.…”
Section: E. K-Nearest Neighbor (KNN) Classifier (mentioning)
confidence: 99%
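The recursive partitioning described in that statement can be sketched as follows: at each node, search every feature/threshold pair for the split with the lowest weighted Gini impurity, then recurse on the two subsets until they are pure or a depth limit is hit. The toy data and thresholds below are illustrative only, not taken from the paper.

```python
def gini(labels):
    # Gini impurity: 1 - sum of squared class proportions.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    # Exhaustive search over (feature, threshold) for the purest binary split.
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, depth=0, max_depth=3):
    # Leaf when the node is pure or the depth limit is reached.
    if len(set(y)) == 1 or depth == max_depth:
        return max(set(y), key=y.count)
    split = best_split(X, y)
    if split is None:
        return max(set(y), key=y.count)
    _, f, t = split
    left = [i for i, row in enumerate(X) if row[f] <= t]
    right = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            build_tree([X[i] for i in left], [y[i] for i in left],
                       depth + 1, max_depth),
            build_tree([X[i] for i in right], [y[i] for i in right],
                       depth + 1, max_depth))

def predict(node, x):
    # Walk internal nodes (feature, threshold, left, right) down to a leaf.
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

X = [[1.0, 0.0], [2.0, 0.0], [3.0, 1.0], [4.0, 1.0]]
y = [0, 0, 1, 1]
tree = build_tree(X, y)
```

Each internal node of the returned tuple tree is exactly the "decision point based on a specific feature" the statement describes; the recursion stops when a subset is already homogeneous.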