2020
DOI: 10.1080/08839514.2020.1824091
Predicting Calorific Value of Thar Lignite Deposit: A Comparison between Back-propagation Neural Networks (BPNN), Gradient Boosting Trees (GBT), and Multiple Linear Regression (MLR)

Cited by 9 publications (5 citation statements)
References 23 publications
“…A BP neural network (BPNN) is a multilayer forward network based on error backpropagation, consisting primarily of an input layer, several hidden layers, and an output layer. It has strong nonlinear fitting capabilities and has some practicality in classifying, identifying, and calculating risk values [ 4 , 5 ]. BPNN can classify any complex pattern and has excellent multidimensional function mapping capabilities.…”
Section: Introduction
confidence: 99%
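The statement above describes the standard BPNN structure: an input layer, one or more hidden layers, an output layer, and weight updates driven by error backpropagation. A minimal pure-Python sketch of that idea follows; the layer sizes, learning rate, and class name are illustrative assumptions (bias terms are omitted for brevity), not the configuration used in the cited paper.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBPNN:
    """Illustrative one-hidden-layer network trained by backpropagation."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        # Input -> hidden weights and hidden -> output weights (no biases).
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]

    def forward(self, x):
        # Hidden activations, then a linear output (regression setting).
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        return sum(w, * ()) if False else sum(w * hi for w, hi in zip(self.w2, self.h))

    def train_step(self, x, y, lr=0.1):
        # One gradient-descent step on squared error, propagating the
        # output error back through the hidden layer.
        pred = self.forward(x)
        err = pred - y
        for j, hj in enumerate(self.h):
            grad_hidden = err * self.w2[j] * hj * (1.0 - hj)
            self.w2[j] -= lr * err * hj          # output-layer update
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * grad_hidden * xi  # hidden-layer update
        return 0.5 * err * err
```

Repeated calls to `train_step` on the same sample should steadily reduce the squared error, which is the nonlinear-fitting behavior the quoted passage refers to.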
“…The Thar Coalfield in Pakistan ranks as the seventh-largest coal field globally (Ahmed et al, 2020). The Thar Coalfield consists…”
Section: Methods
confidence: 99%
“…FIGURE 2Geographical map of Block IX of the Thar Coalfield region [modified after(Ahmed et al, 2020)]. …”
confidence: 99%
“…The application of smart tree-based intelligent methods is limited in GCV prediction. Tree-based intelligent methods, namely regression tree, random forest, gradient boosting tree, and XGBoost were applied by Bui et al [ 8 ], Matin and Chelgani [ 26 ], Ahmed et al [ 27 ], and Chelgani [ 28 ], respectively ( Table 2 ). They applied these techniques to different datasets with different sample sizes.…”
Section: Introduction
confidence: 99%
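The statement above groups the cited paper with other tree-based GCV predictors (regression tree, random forest, gradient boosting tree, XGBoost). The core gradient-boosting idea is to fit each new tree to the residuals of the current ensemble. A minimal sketch with depth-1 regression stumps follows; the data, hyperparameters, and function names are illustrative assumptions, not taken from any of the cited studies.

```python
def fit_stump(xs, residuals):
    """Fit a depth-1 regression tree (stump) minimizing squared error."""
    best = None
    for f in range(len(xs[0])):
        for t in sorted({x[f] for x in xs}):
            left = [r for x, r in zip(xs, residuals) if x[f] <= t]
            right = [r for x, r in zip(xs, residuals) if x[f] > t]
            if not left or not right:
                continue
            lmean = sum(left) / len(left)
            rmean = sum(right) / len(right)
            sse = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, lmean, rmean)
    _, f, t, lmean, rmean = best
    return lambda x: lmean if x[f] <= t else rmean

def gbt_fit(xs, ys, n_trees=20, lr=0.3):
    """Gradient boosting for regression: each stump fits the residuals."""
    base = sum(ys) / len(ys)          # start from the mean prediction
    preds = [base] * len(ys)
    trees = []
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        trees.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(t(x) for t in trees)
```

With a shrinkage factor `lr` below 1, each round removes only part of the remaining residual, which is the usual bias-variance trade-off in boosted trees.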
“…TOB is optimized using memetic firefly optimizer, generalized reduced gradient, and evolutionary algorithm. 2020, Ahmed et al. [ 27 ] — inputs: C_M, V_M, C_A, FC; target: GCV; models: BPNN (R²: 0.89), GBT (R²: 0.91), MLR (R²: 0.80); samples: 8039; Yes. 1. GBT performed better than BPNN and MLR.…”
Section: Introduction
confidence: 99%
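The R² values quoted in the table row above (0.89, 0.91, 0.80) are coefficients of determination: the fraction of variance in the measured calorific value that each model explains. A minimal sketch of the computation, with made-up numbers rather than the paper's data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives R² = 1, while a model no better than predicting the mean gives R² = 0, so GBT's 0.91 versus MLR's 0.80 is the basis for the "GBT performed better" conclusion.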