2013
DOI: 10.1007/s10044-013-0333-y

Generalized multi-scale stacked sequential learning for multi-class classification

Abstract: In many classification problems, neighboring data labels have inherent sequential relationships, independently of their values. Sequential learning algorithms take advantage of these relationships to improve generalization. In this paper, we revise the Multi-Scale Sequential Learning approach (MSSL) for applying it to the multi-class case (MMSSL). We introduce the Error-Correcting Output Codes (ECOC) framework into the MSSL classifiers and propose a formulation for calculating confidence maps from the margins of th…
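The truncated abstract does not show the paper's exact margin-to-confidence formulation, so the following is only a generic sketch of ECOC decoding, assuming sigmoid-squashed margins and agreement-based scoring; the function name and both choices are illustrative rather than the paper's.

```python
import numpy as np

def ecoc_confidence_map(margins, code_matrix):
    """Decode binary-learner margins into normalized per-class confidences.

    margins     : (n_samples, n_dichotomies) raw margins of the binary classifiers.
    code_matrix : (n_classes, n_dichotomies) ECOC matrix with entries in {-1, +1}.
    Returns     : (n_samples, n_classes) confidences, rows summing to 1.
    """
    p = 1.0 / (1.0 + np.exp(-margins))          # sigmoid: margin -> pseudo-probability
    b = (code_matrix + 1) / 2.0                 # codewords mapped to {0, 1}
    score = p @ b.T + (1.0 - p) @ (1.0 - b).T   # agreement with each class codeword
    return score / score.sum(axis=1, keepdims=True)
```

Applied pixel-wise to an image labeling problem, the output columns give one confidence map per class.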

Cited by 14 publications (9 citation statements) | References 17 publications
“…One way to address this difficulty is to modify the neighborhood function and replace the base classifiers used in the original MS-SSL scheme with others capable of dealing with data belonging to N classes. This adaptation, called MMSSL, is presented in the cited work and applied to several multi-class sequential learning problems. In this study, the authors concluded that MMSSL shows a significant performance improvement compared with classical approaches.…”
Section: Stacking Variants and Related Approaches (mentioning)
confidence: 88%
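As a concrete illustration of that adaptation, here is a minimal two-stage sketch, assuming scikit-learn base classifiers, a 1-D ordered sequence, and a simple sliding-window neighborhood over the first-stage class probabilities (MMSSL itself uses a multi-scale neighborhood, and in practice the stage-one training predictions would come from cross-validation to limit overfitting):

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier

def stacked_sequential_fit(X, y, window=2, base=None):
    """Fit a two-stage stacked sequential learner on an ordered sequence.

    X : (n_samples, n_features) ordered observations; y : labels from N classes.
    """
    base = base or RandomForestClassifier(n_estimators=100, random_state=0)
    h1 = clone(base).fit(X, y)                         # stage one: any N-class learner
    proba = h1.predict_proba(X)                        # (n, N) per-class confidences
    pad = np.pad(proba, ((window, window), (0, 0)), mode="edge")
    ctx = np.hstack([pad[i:i + len(X)] for i in range(2 * window + 1)])
    h2 = clone(base).fit(np.hstack([X, ctx]), y)       # stage two: extended features
    return h1, h2
```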
“…However, instead of just relying on individual descriptions, we exploit the confidences provided by the GMMs in the different cells and types of description altogether. This approach follows the Stacked Learning scheme (Cohen, 2005; Puertas et al., 2013), which involves training a new learning algorithm by combining previous predictions obtained with other learning algorithms. More precisely, each grid r is represented by a vector v_r of confidences:…”
Section: Learning-based Fusion Approach (mentioning)
confidence: 99%
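A minimal sketch of that stacked fusion, assuming scikit-learn GMMs scored per grid cell and an SVM meta-learner; the data layout and meta-classifier here are assumptions, not the cited paper's exact setup:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_class_gmms(train_descriptors, n_components=8):
    """train_descriptors: list of (name, (n_points, dim) array), one entry
    per class/descriptor combination (illustrative layout)."""
    return [(name, GaussianMixture(n_components=n_components, random_state=0).fit(X))
            for name, X in train_descriptors]

def confidence_vector(cell_descriptors, gmms):
    """Build v_r for one grid cell: one mean log-likelihood per GMM.
    cell_descriptors: dict name -> (n_points, dim) local features of the cell."""
    return np.array([g.score(cell_descriptors[name]) for name, g in gmms])

def fit_stacked_fusion(cells, labels, gmms):
    """Meta-level learner trained on the stacked confidence vectors."""
    V = np.vstack([confidence_vector(c, gmms) for c in cells])  # rows are v_r
    return SVC(probability=True).fit(V, labels)
```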
“…where C_c refers to the confidence map for the class c. The vector a is the resulting feature vector for p. In effect, this decomposition creates a feature vector whose size is linear in the number of classes, i.e., kn; as a consequence, the framework is suited for problems with a relatively low number of classes, although it can be compressed when dealing with a large number of classes (Puertas et al., 2015). Notice that the points in P_i are found as the points whose Euclidean distance from the query point p falls in the interval i.…”
Section: Multi-scale Sequential Stacked Classifier (mentioning)
confidence: 99%
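A sketch of that decomposition for a single query pixel, assuming NumPy confidence maps and ring-shaped distance intervals (the radii values are illustrative):

```python
import numpy as np

def multiscale_features(conf_maps, p, radii=(2, 4, 8, 16)):
    """Build the feature vector a for query pixel p.

    conf_maps : (n_classes, H, W) array; conf_maps[c] is the map C_c.
    p         : (row, col) coordinates of the query point.
    radii     : increasing bounds defining k distance intervals.
    Returns a length k * n_classes vector: the mean confidence of each
    class over the points P_i in each interval i.
    """
    n_classes, H, W = conf_maps.shape
    rr, cc = np.ogrid[:H, :W]
    dist = np.sqrt((rr - p[0]) ** 2 + (cc - p[1]) ** 2)
    feats, lo = [], 0.0
    for hi in radii:
        ring = (dist >= lo) & (dist < hi)     # the points P_i for interval i
        vals = conf_maps[:, ring]             # (n_classes, |P_i|)
        feats.extend(vals.mean(axis=1) if vals.size else np.zeros(n_classes))
        lo = hi
    return np.asarray(feats)
```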
“…In (Puertas et al., 2015) they present the Multi-class Multi-scale Stacked Sequential Learning (MMSSL) framework. The idea is to stack subsequent classifiers and cumulatively extend the feature sets with filtered versions of each classifier's predictions.…”
Section: Introduction (mentioning)
confidence: 99%
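One stacking step of that idea might look like the sketch below, assuming an image labeling task, Gaussian filtering at several scales as the "filtered versions" of the predictions, and a logistic-regression learner (all illustrative choices rather than the paper's exact configuration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def mmssl_stacking_step(X_img, conf_maps, y, sigmas=(1, 2, 4), base=None):
    """Fit the next stacked classifier on cumulatively extended features.

    X_img     : (H, W, d) per-pixel features.
    conf_maps : (n_classes, H, W) confidence maps from the previous classifier.
    y         : (H, W) ground-truth labels.
    """
    H, W, _ = X_img.shape
    # Multi-scale filtered versions of each class's prediction map.
    filtered = [gaussian_filter(c, sigma=s) for c in conf_maps for s in sigmas]
    Z = np.dstack([X_img] + [f[..., None] for f in filtered])  # extended feature set
    base = base or LogisticRegression(max_iter=1000)
    return clone(base).fit(Z.reshape(H * W, -1), y.ravel())
```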