2022
DOI: 10.1038/s41598-022-24037-4

Source discrimination of mine water based on the random forest method

Abstract: Machine learning is one of the most widely used techniques for pattern recognition. Machine learning tools are becoming an increasingly accessible approach to developing predictive models for preventing engineering disasters. The objective of this research is to estimate the source of mine water using machine learning tools. Random forest classification is a popular machine learning method for developing prediction models in many research settings. The type of mine water in the Pingdingshan coalfield is classified i…
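The abstract describes a random forest classifier applied to mine water source discrimination. A minimal sketch of that kind of setup is shown below, assuming hydrochemical ion concentrations as features and aquifer (water-source) labels as the target; the file name, column names, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: random forest classification of mine water source from
# hydrochemical features. Data layout and parameters are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per water sample, major-ion concentrations
# plus a "source" label (e.g., sandstone aquifer, limestone aquifer, surface water).
df = pd.read_csv("mine_water_samples.csv")            # assumed file and layout
features = ["Na_K", "Ca", "Mg", "Cl", "SO4", "HCO3"]  # assumed feature names
X, y = df[features], df["source"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# An ensemble of decision trees, each split considering a random subset of the features
model = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```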

Cited by 10 publications (6 citation statements)
References 21 publications
“…Five common machine learning methods, namely Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGB), Logistic regression (LR), and Back propagation (BP) neural network were used to model the data in this study (19)(20)(21)(22)(23).…”
Section: Machine Learning Methods (mentioning)
confidence: 99%
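The citation statement above lists five methods (RF, SVM, XGB, LR, and a BP neural network) compared on the same data. A sketch of how such a comparison might be set up is given below; the library choices and hyperparameters are assumptions for illustration, and MLPClassifier stands in for a back-propagation neural network. The citing study's exact settings are not given here.

```python
# Hedged sketch of a five-model comparison (RF, SVM, XGB, LR, BP neural network).
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier      # stand-in for a BP neural network
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier                     # assumes the xgboost package is installed

models = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "XGB": XGBClassifier(n_estimators=200, eval_metric="mlogloss"),
    "LR":  LogisticRegression(max_iter=1000),
    "BP":  MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

def compare(X, y):
    """Cross-validate each model on the same features X and labels y."""
    y_enc = LabelEncoder().fit_transform(y)            # XGBoost expects integer class labels
    for name, clf in models.items():
        scores = cross_val_score(clf, X, y_enc, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```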
“…Subsequently, n classification features (n ≤ N) are randomly selected from the total N features within each sample set to facilitate full node splitting of decision trees, thus generating M decision trees. Consequently, the category of new samples is determined through a majority voting process among the outcomes obtained from all M decision trees [15].…”
Section: Methods (mentioning)
confidence: 99%
“…Subsequently, n classification features (n ≤ N) are randomly selected from the total N features within each sample set to facilitate full node splitting of decision trees, thus generating M decision trees. Consequently, the category of new samples is determined through a majority voting process among the outcomes obtained from all M decision trees [28].…”
Section: Random Forest Model (mentioning)
confidence: 99%
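Both statements describe the same mechanism: M trees grown on bootstrap sample sets, a random subset of n ≤ N features considered at each split, and a majority vote over the M tree predictions. The toy sketch below illustrates that idea only; it is not the cited implementation, and it assumes integer-coded class labels.

```python
# Toy sketch of bagged trees with per-split feature subsampling and majority voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, M=100, seed=0):
    """Grow M trees, each on a bootstrap sample, splitting on random feature subsets."""
    rng = np.random.default_rng(seed)
    n_samples = X.shape[0]
    trees = []
    for _ in range(M):
        idx = rng.integers(0, n_samples, n_samples)         # bootstrap sample set
        tree = DecisionTreeClassifier(max_features="sqrt")   # n = sqrt(N) features per split
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def predict_forest(trees, X_new):
    """Majority vote across the M trees for each new sample (integer-coded labels assumed)."""
    votes = np.array([t.predict(X_new) for t in trees])      # shape (M, n_new)
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```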