2020
DOI: 10.1007/s11063-020-10362-0

Detecting Ordinal Subcascades

Abstract: Ordinal classifier cascades are constrained by a hypothesised order of the semantic class labels of a dataset. This order determines the overall structure of the decision regions in feature space. Assuming the correct order on these class labels will allow a high generalisation performance, while an incorrect one will lead to diminished results. In this way ordinal classifier systems can facilitate explorative data analysis, allowing one to screen for potential candidate orders of the class labels. Previously, we h…
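As a concrete illustration of the idea sketched in the abstract, the following minimal Python example builds a Frank-and-Hall-style ordinal classifier cascade: K-1 binary stages are trained along a hypothesised label order, and a sample walks down the cascade until a stage rejects it. The class name OrdinalCascade and the LinearSVC base learner are illustrative assumptions for this sketch, not the method proposed in the paper.

import numpy as np
from sklearn.base import clone
from sklearn.svm import LinearSVC

class OrdinalCascade:
    """Cascade of K-1 binary stages; stage k answers 'is the label beyond class k?'."""

    def __init__(self, base_estimator=None):
        # illustrative default; any scikit-learn classifier could be plugged in
        self.base_estimator = base_estimator if base_estimator is not None else LinearSVC()

    def fit(self, X, y):
        y = np.asarray(y)
        # the hypothesised order is taken to be the sorted label order here
        self.classes_ = np.sort(np.unique(y))
        self.stages_ = []
        for k in self.classes_[:-1]:
            stage = clone(self.base_estimator)
            stage.fit(X, (y > k).astype(int))  # binary target: is the label beyond class k?
            self.stages_.append(stage)
        return self

    def predict(self, X):
        n = np.asarray(X).shape[0]
        pred = np.full(n, self.classes_[-1])   # default: last class of the order
        decided = np.zeros(n, dtype=bool)
        # walk the cascade in the hypothesised order; the first stage that answers
        # 'not beyond class k' fixes the prediction to class k
        for k, stage in zip(self.classes_[:-1], self.stages_):
            stop = (~decided) & (stage.predict(X) == 0)
            pred[stop] = k
            decided |= stop
        return pred

Fitting one such cascade per candidate permutation of the labels and comparing their generalisation performance is the kind of screening for candidate class orders that the abstract alludes to.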


Cited by 5 publications (1 citation statement)
References 50 publications
“…Therefore, one can simply choose one of the existing divide-and-conquer methods, such as the error-correcting output codes (Dietterich and Bakiri, 1991), including, e.g., the one-versus-one and one-versus-all approaches. Moreover, one can also apply our proposed method in the form of cascaded classification architectures (Frank and Hall, 2001), if it is possible to detect an ordinal class structure in the current classification task, as recently proposed, for instance, in (Bellmann and Schwenker, 2020; Lausser et al., 2020).…”
Section: Discussion
confidence: 99%
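The decompositions named in this citation statement (one-versus-one, one-versus-all, and error-correcting output codes) are all available off the shelf in scikit-learn. The short sketch below, using the iris dataset as a stand-in for a real task, only shows how the cited techniques map onto standard tooling; it is not taken from either of the cited papers.

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier, OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)

# one-versus-all and one-versus-one decompositions into binary problems
for wrapper in (OneVsRestClassifier, OneVsOneClassifier):
    model = wrapper(LinearSVC()).fit(X, y)
    print(wrapper.__name__, model.score(X, y))

# error-correcting output codes (Dietterich and Bakiri): a random code matrix
# defining code_size * n_classes binary problems
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
print("OutputCodeClassifier", ecoc.fit(X, y).score(X, y))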