EBOC: Ensemble-Based Ordinal Classification in Transportation (2019)
DOI: 10.1155/2019/7482138

Abstract: Learning the latent patterns of historical data in an efficient way to model the behaviour of a system is a major need for making the right decisions. For this purpose, machine learning solutions have already made their promising mark in transportation, as well as in many other areas such as marketing, finance, education, and health. However, many classification algorithms in the literature assume that the target attribute values in the datasets are unordered, so they lose the inherent order between the class values. To overco…

Cited by 20 publications (11 citation statements). References 20 publications.
“…Ensemble methods attempt to overcome the bias or variance effects of individual classifiers by combining several of them together [32], thus achieving better performance [33, 34]. Despite the fact that ensemble methods for nominal classification have received considerable attention in the literature and have been chosen in preference to single individual learning algorithms for many classification tasks, the use of ordinal classification in ensemble algorithms has rarely been discussed [35]. In the literature, the most widely used ensemble methods are categorized into four techniques: bagging, boosting, stacking, and voting [35, 36].…”
Section: Methods
confidence: 99%
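
The four techniques named in the quoted passage can be illustrated concretely. The following is a minimal sketch using scikit-learn, an assumed and illustrative choice; the toy dataset and base learners are placeholders, not the EBOC setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           random_state=0)

ensembles = {
    # Bagging: copies of one learner trained on bootstrap resamples.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    # Boosting: learners trained sequentially, upweighting hard examples.
    "boosting": AdaBoostClassifier(n_estimators=50),
    # Stacking: a meta-learner combines the base learners' predictions.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()), ("nb", GaussianNB())],
        final_estimator=LogisticRegression()),
    # Voting: base learners' predictions aggregated by majority vote.
    "voting": VotingClassifier(
        estimators=[("tree", DecisionTreeClassifier()), ("nb", GaussianNB())],
        voting="hard"),
}

for name, model in ensembles.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")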
“…Despite the fact that ensemble methods for nominal classification have received considerable attention in the literature and have been chosen in preference to single individual learning algorithms for many classification tasks, the use of ordinal classification in ensemble algorithms has rarely been discussed [35]. In the literature, the most widely used ensemble methods are categorized into four techniques: bagging, boosting, stacking, and voting [35, 36]. In this research, we suggest integrating the objective-based information gain into ensemble methods to enable the use of ordinal classification in ensemble algorithms.…”
Section: Methods
confidence: 99%
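
To make the bridge between ordinal classification and ensemble learners concrete, here is a minimal sketch of the classic threshold decomposition in the style of Frank and Hall, which work in this area often builds on. The class name OrdinalClassifier is a hypothetical illustration, not the paper's implementation; any probabilistic classifier, including the ensembles sketched above, can serve as the base learner.

import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier

class OrdinalClassifier:
    """Ordinal classification via k-1 binary 'is y > c_i?' subproblems."""

    def __init__(self, base_learner=None):
        self.base_learner = base_learner or RandomForestClassifier()

    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))  # assumed ordinal label order
        # One binary model per threshold between consecutive classes.
        self.models_ = [clone(self.base_learner).fit(X, (y > c).astype(int))
                        for c in self.classes_[:-1]]
        return self

    def predict(self, X):
        # Column i holds the estimate of P(y > c_i).
        above = np.column_stack([m.predict_proba(X)[:, 1]
                                 for m in self.models_])
        # Recover per-class probabilities from the cumulative estimates:
        # P(c_1) = 1 - P(y > c_1); P(c_i) = P(y > c_{i-1}) - P(y > c_i);
        # P(c_k) = P(y > c_{k-1}).
        probs = np.column_stack([1.0 - above[:, 0],
                                 above[:, :-1] - above[:, 1:],
                                 above[:, -1]])
        return self.classes_[np.argmax(probs, axis=1)]

Passing a bagging, boosting, stacking, or voting ensemble as base_learner is exactly the kind of combination the quoted passage says has rarely been discussed.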
“…Meanwhile, ordinal classification, which assumes that the class attributes of experimental datasets have an inherent order, has recently been preferred in ML. This paradigm has been utilized in software engineering (Diamantopoulos et al., 2021; Fontana & Zanoni, 2017; Kıyak et al., 2019) as well as in many areas, such as healthcare (Durán‐Rosal et al., 2021; Liu et al., 2020; Pal et al., 2018), banking (Manthoulis et al., 2020), meteorology (Guijo‐Rubio et al., 2018), and transportation (Kıyak et al., 2019; Yıldırım et al., 2019) for a classification task. Researchers (Kıyak et al., 2019) proposed an ordinal classification approach using RF, SVM, NB, and k‐nearest neighbour (kNN) algorithms as base learners for software bug prediction in their study.…”
Section: Related Work
confidence: 99%
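
Continuing the hypothetical OrdinalClassifier sketch above (which must be in scope), the same decomposition accepts any probabilistic base learner, including the RF, SVM, NB, and kNN algorithms named in the quoted passage; the toy dataset is a placeholder, not the cited studies' data.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Four ordered classes 0 < 1 < 2 < 3 stand in for an ordinal target.
X, y = make_classification(n_samples=400, n_classes=4, n_informative=6,
                           random_state=1)
base_learners = {"RF": RandomForestClassifier(),
                 "SVM": SVC(probability=True),  # enables predict_proba
                 "NB": GaussianNB(),
                 "kNN": KNeighborsClassifier()}
for name, clf in base_learners.items():
    acc = (OrdinalClassifier(clf).fit(X, y).predict(X) == y).mean()
    print(f"{name}: training accuracy {acc:.3f}")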
“…AdaBoost trains multiple single-split decision trees consecutively to convert weak learners into strong ones by reweighting instances in the training set [13].…”
Section: Extra Tree Regression (ETR)
confidence: 99%
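
As a concrete illustration of the quoted description, here is a minimal sketch of AdaBoost over decision stumps (single-split trees) in scikit-learn; the dataset is an illustrative placeholder.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth=1 makes each tree a single-split "stump". AdaBoost fits the
# stumps one after another, increasing the weight of misclassified training
# examples so that later stumps concentrate on the hard cases.
stump = DecisionTreeClassifier(max_depth=1)
booster = AdaBoostClassifier(stump, n_estimators=100).fit(X_train, y_train)
print(f"test accuracy: {booster.score(X_test, y_test):.3f}")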