2020
DOI: 10.1016/j.petrol.2019.106598
A comparative analysis of bubble point pressure prediction using advanced machine learning algorithms and classical correlations

Cited by 44 publications (20 citation statements)
References 22 publications
“…In this study, the following five machine learning algorithms are implemented for dead oil viscosity prediction and the results are compared across the board: XGBoost [86], LightGBM [87], random forest [88], an artificial neural network [89] and SVR [90]. Likewise, given the superiority and robustness of SuperLearner proven in other fields [78,80,91–93], this method is applied here for the first time and its output is compared to that of the other five algorithms.…”
Section: Model Development
confidence: 99%
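The multi-algorithm comparison described in the statement above can be sketched with scikit-learn. This is a hypothetical setup on synthetic data, not the cited study's dataset or tuning; XGBoost and LightGBM live in separate packages and are omitted here.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Synthetic stand-in for a viscosity-style regression dataset (assumption).
X, y = make_regression(n_samples=400, n_features=5, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three of the five model families named in the quote.
models = {
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVR": SVR(C=10.0),
}

# Fit each model on the same split and compare held-out RMSE.
rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5

for name, err in sorted(rmse.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {err:.3f}")
```

The shared train/test split is what makes the comparison "across the board": every model is scored on identical held-out data.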
“…Jiang et al [27] also proposed a stacked ensemble model, which used RF, ERT, XGBoost, LightGBM, RNN, bidirectional RNN, LSTM, and GRU as base learners, with logistic regression as the second-level meta-learner. In different application scenarios, the stacked models constructed by researchers such as Li [28], Yang [29], and Feng [30] have achieved good results, suggesting an approach for establishing stacked models to predict pavement performance in the field of urban highway tunnel pavement performance prediction.…”
Section: Introduction
confidence: 99%
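A two-level stacked ensemble of the kind described above — tree-based base learners whose cross-validated predictions are combined by a second-level meta-learner — can be sketched with scikit-learn's `StackingRegressor`. The data and estimator choices here are illustrative assumptions (the cited work uses logistic regression as a classification meta-learner; a ridge regressor plays that role for regression in this sketch).

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=6, noise=0.3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# First level: base learners. Second level: the meta-learner is trained on
# the base learners' out-of-fold predictions, not their training-set fits.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=1)),
        ("gb", GradientBoostingRegressor(random_state=1)),
    ],
    final_estimator=RidgeCV(),
)
stack.fit(X_tr, y_tr)
print(f"held-out R^2: {stack.score(X_te, y_te):.3f}")
```

Using out-of-fold predictions for the meta-learner is the key design choice: it keeps the second level from simply memorizing base-learner overfit.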
“…Another new gradient learning framework built upon the idea of the decision tree is LightGBM [48]. The salient features by which LightGBM dominates XGBoost are lower memory consumption, a leaf-wise growth approach with depth restrictions, and a histogram-based algorithm that expedites the training process [49]. Using the aforementioned histogram algorithm, LightGBM discretizes continuous floating-point feature values into k bins, hence building a k-width histogram.…”
Section: Models Implementation
confidence: 99%
“…The downside of leaf-wise orientation is growing deeper decision trees, which unavoidably results in overfitting. However, LightGBM precludes this overfitting while maintaining high efficiency by applying a maximum depth limit at the leaf level [48,49].
Figure 2: Leaf-wise tree growth in LightGBM.
…”
Section: Models Implementation
confidence: 99%
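The histogram step quoted above — discretizing continuous feature values into k bins before split finding — can be sketched with NumPy. This is a simplified illustration, not LightGBM's actual implementation, which uses its own bin-boundary heuristics:

```python
import numpy as np

def to_histogram_bins(feature: np.ndarray, k: int = 16):
    """Map continuous values to integer bin ids in [0, k), using
    quantile bin edges so each bin holds roughly equal mass."""
    edges = np.quantile(feature, np.linspace(0.0, 1.0, k + 1)[1:-1])
    return np.searchsorted(edges, feature, side="right"), edges

rng = np.random.default_rng(42)
x = rng.normal(size=10_000)
bins, edges = to_histogram_bins(x, k=16)

# Split finding now scans only k - 1 candidate boundaries instead of one
# per distinct feature value, which is what speeds up training.
counts = np.bincount(bins, minlength=16)
print(counts)
```

Because every sample is reduced to a small integer bin id, the gradient statistics per bin fit in a compact histogram, which is also where the memory savings over exact split enumeration come from.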