2019
DOI: 10.1109/access.2019.2925300
Addressing the Overlapping Data Problem in Classification Using the One-vs-One Decomposition Strategy

Cited by 36 publications (10 citation statements) | References 38 publications
“…The problem can be addressed either by using classifiers that reduce the negative impact of this overlap, or by dropping the features that create the overlapping regions; dropping columns, however, can also cause information loss 49 . To deal with this class overlap, we decided to merge the 31 labels into 12.…”
Section: Methods
confidence: 99%
“…OVO benefits multi-class classification by increasing the separability of classes. Moreover, it has been identified as an approach that particularly benefits SVMs, providing robust results and superior performance [27]. The SVM models for the linear, RBF, polynomial, sigmoid and precomputed kernels were built and trained using the best combination of hyperparameters obtained through hyperparameter tuning.…”
Section: Lk(x, y)
confidence: 99%
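The setup described in the statement above can be sketched as follows. This is a minimal illustration, not the cited authors' code: the dataset (iris), the regularization grid, and the cross-validation settings are assumptions. scikit-learn's `SVC` applies the one-vs-one decomposition internally for multi-class data; the `precomputed` kernel is omitted here because it requires supplying a Gram matrix rather than raw features.

```python
# Sketch: one-vs-one SVM with a kernel/hyperparameter search.
# Dataset and grid values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVC uses the one-vs-one strategy internally for multi-class problems;
# here we tune the kernel and the regularization strength C jointly.
grid = GridSearchCV(
    SVC(decision_function_shape="ovo"),
    param_grid={
        "kernel": ["linear", "rbf", "poly", "sigmoid"],
        "C": [0.1, 1, 10],
    },
    cv=3,
)
grid.fit(X_tr, y_tr)

best = grid.best_estimator_          # model with the best CV combination
acc = best.score(X_te, y_te)         # held-out accuracy of that model
```

After fitting, `grid.best_params_` reports which kernel/C combination won the cross-validation, mirroring the "best combination of hyperparameters obtained through hyperparameter tuning" in the statement.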
“…The kernel functions of the support vector machines used a radial basis function with a gamma parameter taking the values 0.1, 1, 10, 100 and 1000. For each value of gamma, the SVM was reinitialised 20 times to increase the chance of obtaining an optimal classifier [71,73]. For the classification statistics, the SVM was trained for every target class in the dataset (each species) [71].…”
Section: Classification Statistics With and Without Hybrids' Presence
confidence: 99%
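The gamma sweep with repeated reinitialisation described above could look like the following sketch. The dataset (wine) and the resampled-split restart scheme are assumptions; the cited work does not specify what "reinitialised" entails, so here each restart refits on a differently seeded train/validation split and the best-scoring classifier is kept.

```python
# Sketch: RBF-kernel SVM gamma sweep with 20 restarts per gamma value.
# Dataset and restart scheme are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
gammas = [0.1, 1, 10, 100, 1000]   # gamma range stated in the text
n_restarts = 20                    # reinitialisations per gamma value

best_score, best_gamma = -1.0, None
for gamma in gammas:
    for seed in range(n_restarts):
        # Each "reinitialisation" here is a refit on a reseeded split.
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.3, random_state=seed)
        clf = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr)
        score = clf.score(X_val, y_val)
        if score > best_score:
            best_score, best_gamma = score, gamma
```

The winning `best_gamma` is then the candidate used for the per-class (per-species) classification runs the statement mentions.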