2016
DOI: 10.1016/j.patcog.2016.04.008
A non-parametric approach to extending generic binary classifiers for multi-classification

Cited by 21 publications (12 citation statements)
References 33 publications
“…Although these strategies are effective, they encounter two main obstacles. First, the complicated optimization problems that some of these methods produce lead to time-consuming training procedures (Galar et al.; Liu et al.; Santhanam et al.), particularly when DL models are used as base classifiers. Second, the noisy, high-dimensional feature space of MI tasks hinders the effective use of the OVA- and OVO-based strategies.…”
Section: Introduction
confidence: 99%
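The training-cost obstacle in the excerpt above follows directly from how the OVA and OVO decompositions multiply the number of binary sub-problems. A minimal sketch (an illustration, not code from any of the cited works) counting the binary classifiers each strategy must train for K classes:

```python
# Sketch: number of binary sub-problems created by the one-vs-all (OVA)
# and one-vs-one (OVO) decompositions. With deep models as base
# classifiers, each sub-problem is a full training run, so these counts
# dominate the overall training cost.

def ova_classifiers(k: int) -> int:
    """One-vs-all trains one binary classifier per class."""
    return k

def ovo_classifiers(k: int) -> int:
    """One-vs-one trains one classifier per unordered class pair."""
    return k * (k - 1) // 2

for k in (4, 10, 100):
    print(k, ova_classifiers(k), ovo_classifiers(k))
# OVO grows quadratically: 100 classes already need 4950 binary models.
```

The quadratic growth of OVO is why a complicated per-classifier optimization quickly becomes prohibitive at scale.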
“…The trip purposes of the respondents' trips were first identified by the trained CHMM model and then compared with the real trip purposes reported by the respondents, so that the accuracy of the proposed method could be verified. Since label variables were added to the questionnaire, the reliability of the model could be evaluated as a multi-classification problem [35]. Each activity in the trip-chains was treated as a sample for metric calculation, so more samples were available during the evaluation.…”
Section: B Model Validation Results
confidence: 99%
“…However, the ratio of competent to non-competent classifiers becomes 1:1 for data with four classes and monotonically decreases in favor of non-competent classifiers as the number of classes increases. In these imbalanced multiclass settings, a more sophisticated approach using some form of weighted voting should be used instead [60,62].…”
Section: Cassini Simulations
confidence: 99%
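The 1:1 ratio quoted above can be checked directly: in an OVO ensemble over K classes, a sample of a given class has K-1 competent pairwise classifiers (those trained on its class) and (K-1)(K-2)/2 non-competent ones, giving a ratio of 2:(K-2). A small sketch of this arithmetic (an illustration of the cited setting, not code from the paper):

```python
# Sketch: competent-to-non-competent classifier ratio in a one-vs-one
# (OVO) ensemble. For a sample of class c among K classes, the K-1
# classifiers trained on c are competent; the remaining
# (K-1)(K-2)/2 pairwise classifiers never saw class c.
from fractions import Fraction

def competent_ratio(k: int) -> Fraction:
    """Ratio of competent to non-competent OVO classifiers (k >= 3)."""
    competent = k - 1
    non_competent = (k - 1) * (k - 2) // 2
    return Fraction(competent, non_competent)

print(competent_ratio(4))   # 1   -> the 1:1 ratio at four classes
print(competent_ratio(10))  # 1/4 -> non-competent classifiers dominate
```

As K grows, the ratio 2:(K-2) shrinks monotonically, which is why unweighted majority voting degrades and some form of weighted voting becomes necessary.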