2020
DOI: 10.1007/s13202-020-00839-y
A comparative study of heterogeneous ensemble methods for the identification of geological lithofacies

Abstract: Mudstone reservoirs demand accurate information about subsurface lithofacies for field development and production. Normally, quantitative lithofacies modeling is performed using well-log data to identify subsurface lithofacies. Well-log data, recorded from these unconventional mudstone formations, are complex in nature. Therefore, identification of lithofacies using conventional interpretation techniques is a challenging task. Several data-driven machine learning models have been proposed in the literature…

Cited by 46 publications (24 citation statements) · References 51 publications
“…Xie et al (2019) applied regularization to gradient tree boosting (GTB) and XGBoost and stacked the classifiers to improve classification accuracy. Tewari and Dwivedi (2020) also showed that heterogeneous ensemble methods, namely voting and stacking, could improve the prediction accuracy for mudstone lithofacies in a Kansas oil-field area. Ao et al (2019b) proposed a linear random forest (LRF) algorithm for better logging regression modeling with limited samples.…”
Section: Related Work (mentioning)
confidence: 93%
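The stacking scheme mentioned above can be illustrated with a minimal sketch. This is not the cited papers' implementation; it assumes scikit-learn's StackingClassifier, synthetic stand-in data, and arbitrarily chosen base learners.

```python
# Minimal sketch of a heterogeneous stacking ensemble, in the spirit of the
# approaches cited above; data and model choices are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for well-log measurements; not the papers' datasets.
X, y = make_classification(n_samples=1000, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base learners; a logistic-regression meta-learner combines
# their out-of-fold predictions (the "stacking" combination scheme).
stack = StackingClassifier(
    estimators=[("gtb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("stacking accuracy:", stack.score(X_test, y_test))
```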
“…Each base classifier independently assigns a class label to a given test sample during the testing phase. The final class of the test sample is the label assigned to it most often [61]. Soft voting methods, on the other hand, calculate the average probability of each class, and the final prediction is the class with the highest average probability [60].…”
Section: Materials and Methodology (mentioning)
confidence: 99%
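The hard/soft voting distinction described in this statement maps directly onto scikit-learn's VotingClassifier. The sketch below is an assumed illustration on synthetic data, not the study's actual pipeline.

```python
# Minimal sketch contrasting hard (majority) and soft (probability-averaging)
# voting, as described above; data and base models are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=3, random_state=1)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=1)),
        ("knn", KNeighborsClassifier())]

# Hard voting: each base classifier casts one class label per test sample;
# the label received most often wins.
hard = VotingClassifier(estimators=base, voting="hard").fit(X, y)

# Soft voting: per-class probabilities are averaged across classifiers;
# the class with the highest mean probability is predicted.
soft = VotingClassifier(estimators=base, voting="soft").fit(X, y)

print(hard.predict(X[:3]), soft.predict(X[:3]))
```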
“…These dynamic methods, which combine outcomes from their base models, produce superior results compared to static counterparts that, for example, only take a majority vote over the participating base models. Two combination schemes can be used to merge results from different base models: majority voting (also known as hard voting) and average confidence probability (also known as soft voting) [32]. In addition, weighted voting and voting without weights are two ways of combining not only homogeneous but also heterogeneous base algorithm outputs [33].…”
Section: Literature Review (mentioning)
confidence: 99%
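The weighted-voting variant mentioned in this statement can likewise be sketched with VotingClassifier's weights parameter. The weights below are arbitrary placeholders for illustration, not values from any cited study.

```python
# Minimal sketch of weighted soft voting: each base classifier's probability
# vector is scaled by a weight before averaging; weights here are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=3, random_state=2)

weighted = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=2)),
                ("rf", RandomForestClassifier(random_state=2))],
    voting="soft",
    weights=[1, 1, 2],  # e.g., weight the random forest twice as heavily
).fit(X, y)
print(weighted.predict(X[:3]))
```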