2021
DOI: 10.1016/j.apt.2021.09.020

Modeling of particle sizes for industrial HPGR products by a unique explainable AI tool- A “Conscious Lab” development

Cited by 36 publications (20 citation statements)
References 63 publications
“…Given a training dataset of n samples T = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, x_i ∈ ℝ^m, y_i ∈ ℝ, the objective function can be defined by

$$\mathcal{L} = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{t} \Omega(f_t),$$

where $l(y_i, \hat{y}_i)$ measures the difference between the target $y_i$ and the prediction $\hat{y}_i$, and $f_t$ denotes the prediction score of the $t$-th tree [53]. $\Omega(f_t)$ represents the regularization term, which controls the model's complexity to avoid overfitting [50]. The estimated loss function can be computed based on a second-order Taylor expansion of the objective function:

$$\mathcal{L}^{(t)} \approx \sum_{i=1}^{n} \Big[ l\big(y_i, \hat{y}_i^{(t-1)}\big) + g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^{2}(x_i) \Big] + \Omega(f_t),$$

where $g_i = \partial_{\hat{y}^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big)$ denotes each sample's first derivative and $h_i = \partial^{2}_{\hat{y}^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big)$ denotes each sample's second derivative.…”
Section: Methods (mentioning)
confidence: 99%
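A minimal numeric check of the second-order expansion quoted above, assuming the logistic loss $l(y, s) = \log(1 + e^{-ys})$; the function names and sample values below are illustrative, not taken from the cited work:

```python
# Verify that l(y, s + f) is well approximated by
# l(y, s) + g*f + 0.5*h*f^2 for a small tree output f.
import numpy as np

def loss(y, s):
    return np.log1p(np.exp(-y * s))

def grad(y, s):   # g: first derivative of the loss w.r.t. the score s
    return -y / (1.0 + np.exp(y * s))

def hess(y, s):   # h: second derivative of the loss w.r.t. the score s
    p = 1.0 / (1.0 + np.exp(-y * s))
    return p * (1.0 - p)

y, s, f = 1.0, 0.3, 0.05          # label, current score, new tree's output
exact = loss(y, s + f)
approx = loss(y, s) + grad(y, s) * f + 0.5 * hess(y, s) * f**2
print(exact, approx)              # the two agree closely for small f
```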
“…Extreme Gradient Boosting (XGBoost), proposed by Chen and Guestrin 49, is an efficient and scalable ensemble algorithm based on gradient-boosted trees 16, 50. XGBoost has been applied in a wide range of engineering fields with outstanding performance, owing to its parallel tree boosting and its various regularization techniques 13, 51, 52. It is a stable algorithm with low bias and variance that handles outliers well 24, 53.…”
Section: Methods (mentioning)
confidence: 99%
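As a concrete illustration of the parallelism and regularization options mentioned above, a minimal sketch of fitting an XGBoost regressor; the synthetic data and hyperparameter values are illustrative, not taken from the cited studies:

```python
import numpy as np
import xgboost as xgb

# Illustrative synthetic regression data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    reg_lambda=1.0,   # L2 penalty on leaf weights (part of the Omega term)
    reg_alpha=0.0,    # optional L1 penalty
    n_jobs=-1,        # parallel tree construction
)
model.fit(X, y)
print(model.predict(X[:3]))
```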
“…Mathematically speaking, RF generates an ensemble of decision trees. Using these trees, the final output for an input feature vector $x$ is computed as the average of the individual tree estimates:

$$\hat{y}(x) = \frac{1}{K} \sum_{k=1}^{K} T_k(x),$$

where $T_k(x)$ is the result of the $k$-th tree's estimation 13, 64–66.…”
Section: Methods (mentioning)
confidence: 99%
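A minimal sketch of this averaging rule using scikit-learn's RandomForestRegressor, making the per-tree mean explicit; the dataset is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# y_hat(x) = (1/K) * sum_k T_k(x): average the K individual tree estimates
per_tree = np.stack([tree.predict(X[:3]) for tree in rf.estimators_])
manual_mean = per_tree.mean(axis=0)

# The forest's own prediction is exactly this mean
assert np.allclose(manual_mean, rf.predict(X[:3]))
```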
“…First, the GBDT algorithm employs only a first-order Taylor expansion, whereas XGBoost expands the loss function to second order. Second, the objective function uses regularization to prevent overfitting and reduce the model's complexity [46], [47]. Third, XGBoost is extremely adaptable, allowing users to create their own optimization objectives and evaluation criteria.…”
Section: Methods (mentioning)
confidence: 99%
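A minimal numeric sketch of the first point, assuming the squared loss $l = \tfrac{1}{2}(y - \hat{y})^2$: GBDT fits the next tree to the negative gradient alone, while XGBoost's closed-form leaf weight also uses the second derivative plus an L2 penalty. The leaf-weight formula $w^* = -G/(H + \lambda)$ follows the standard XGBoost derivation; the numbers are illustrative:

```python
import numpy as np

y = np.array([2.0, 4.0, 9.0])         # targets
y_pred = np.array([3.0, 3.0, 3.0])    # current ensemble prediction

g = y_pred - y            # first derivative of (1/2)(y - y_hat)^2
h = np.ones_like(y)       # second derivative (constant for squared loss)

# GBDT (first order): target for the next tree is the negative gradient
residual_target = -g
print(residual_target)    # [-1.  1.  6.]

# XGBoost (second order + L2): closed-form weight for a single leaf
lam = 1.0
w_star = -g.sum() / (h.sum() + lam)
print(w_star)             # -(-6) / (3 + 1) = 1.5
```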