2001
DOI: 10.1007/3-540-44816-0_12

An Evaluation of Grading Classifiers

Cited by 85 publications (45 citation statements)
References 9 publications
“…We then summarize the results of several recent studies in stacking [8,11,12,10,13]. Motivated by these, we introduce a modified stacking approach based on classification via linear regression [11].…”
Section: Stacking
confidence: 99%
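The modified stacking approach mentioned in the statement above (classification via linear regression at the meta level) can be sketched roughly as follows: the meta-level attributes are the base classifiers' cross-validated class probabilities, and one linear regression per class acts as the meta-learner. This is a minimal illustration assuming scikit-learn; the helper names fit_stacking and predict_stacking and the exact combination details are assumptions for illustration, not the cited authors' procedure.

```python
# Hedged sketch of stacking with a linear-regression meta-learner:
# one regression per class over the base classifiers' class probabilities.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def fit_stacking(base_classifiers, X, y, cv=10):
    classes = np.unique(y)
    # Meta-level attributes: cross-validated class probabilities of every base model.
    meta_X = np.hstack([
        cross_val_predict(clone(clf), X, y, cv=cv, method="predict_proba")
        for clf in base_classifiers
    ])
    # One linear regression per class, fit on a one-vs-rest indicator target.
    meta_models = [LinearRegression().fit(meta_X, (y == c).astype(float))
                   for c in classes]
    fitted_bases = [clf.fit(X, y) for clf in base_classifiers]
    return fitted_bases, meta_models, classes

def predict_stacking(fitted_bases, meta_models, classes, X):
    meta_X = np.hstack([clf.predict_proba(X) for clf in fitted_bases])
    scores = np.column_stack([m.predict(meta_X) for m in meta_models])
    return classes[np.argmax(scores, axis=1)]
```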
“…Seewald and Fürnkranz [10] propose a method for combining classifiers called grading that learns a meta-level classifier for each base-level classifier. The meta-level classifier predicts whether the base-level classifier is to be trusted (i.e., whether its prediction will be correct).…”
Section: Recent Advances
confidence: 99%
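The grading scheme described in the statement above can be sketched as follows: for each base-level classifier, a grader is trained to predict whether that classifier's cross-validated prediction is correct, and at test time only predictions the graders trust are allowed to vote. A minimal sketch assuming scikit-learn; the class name GradingEnsemble, the 0.5 trust threshold, and the confidence-weighted voting rule are illustrative assumptions rather than the method's exact specification.

```python
# Hedged sketch of grading: one grader (meta-level classifier) per base classifier
# predicts whether that base classifier's prediction can be trusted.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict

class GradingEnsemble:
    def __init__(self, base_classifiers, grader, cv=10):
        self.base_classifiers = base_classifiers
        self.grader = grader          # template for the meta-level classifiers
        self.cv = cv

    def fit(self, X, y):
        self.graders_ = []
        for clf in self.base_classifiers:
            # Cross-validated predictions avoid grading against training-set fits.
            preds = cross_val_predict(clone(clf), X, y, cv=self.cv)
            correct = (preds == y).astype(int)   # grading labels: 1 = trustworthy
            # Assumes both correct and incorrect predictions occur in the data.
            self.graders_.append(clone(self.grader).fit(X, correct))
            clf.fit(X, y)                        # refit base classifier on all data
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes_)))
        for clf, grader in zip(self.base_classifiers, self.graders_):
            preds = clf.predict(X)
            trust = grader.predict_proba(X)[:, 1]    # P(prediction is correct)
            for i, (p, t) in enumerate(zip(preds, trust)):
                if t > 0.5:                          # only trusted predictions vote
                    votes[i, np.searchsorted(self.classes_, p)] += t
        # Fall back to the first base classifier when no grader trusts anyone.
        fallback = self.base_classifiers[0].predict(X)
        out = [fallback[i] if votes[i].sum() == 0
               else self.classes_[np.argmax(votes[i])]
               for i in range(X.shape[0])]
        return np.array(out)
```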
“…In the Grading method as proposed by [6], when there is a tie in the likelihood of an instance belonging to different classes, the meta-classifier checks which of the classes has the higher prior probability and makes its decision accordingly. We suggest an alternative scheme in which, instead of completely ignoring the predictions of some of the base classifiers (the classifiers with a higher probability of being wrong than correct), the grader should assign a delta (close to zero) probability to all such classifiers being correct.…”
Section: Tie Breaking for Grading
confidence: 99%
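The alternative tie-breaking scheme suggested in the statement above replaces the hard exclusion of untrusted base classifiers with a small residual weight. A rough sketch under the same assumptions as the grading example above; DELTA, the helper name combine_with_delta, and the 0.5 trust cutoff are hypothetical choices, not the cited authors' implementation.

```python
# Hedged variant of the vote-combination step: untrusted base classifiers keep
# a tiny delta weight instead of being dropped, which breaks ties more gracefully.
import numpy as np

DELTA = 1e-3  # assumed "close to zero" weight for untrusted classifiers

def combine_with_delta(preds_per_clf, trust_per_clf, classes):
    """preds_per_clf[j][i]: prediction of classifier j on instance i;
    trust_per_clf[j][i]: the grader's estimate of P(prediction correct)."""
    n = len(preds_per_clf[0])
    votes = np.zeros((n, len(classes)))
    for preds, trust in zip(preds_per_clf, trust_per_clf):
        for i, (p, t) in enumerate(zip(preds, trust)):
            weight = t if t > 0.5 else DELTA   # untrusted classifiers still get a small voice
            votes[i, np.searchsorted(classes, p)] += weight
    return classes[np.argmax(votes, axis=1)]
```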
“…Grading. The defining feature of methods in this category (also known as the referee method [5,6]) is that, instead of directly finding the relationship between the predictions of the base classifiers and the actual class (as in stacking), the meta-classifier grades the base classifiers and selects either a single base classifier or a subset of base classifiers that are likely to be correct for the given test instance. The intuition behind grading is that in large datasets, where there may be multiple functions defining the relationship between predictor and response variables, it is important to choose the correct function for any given test instance.…”
Section: Introduction
confidence: 99%
“…In contrast, delegation produces models which are completely and exclusively defined in terms of the original attributes and class. Arbitrating (Ortega et al., 2001) and grading (Seewald & Fürnkranz, 2001) are also related to delegation, but both learn external referees to assess the probability of error of each classifier from the pool of base classifiers, and their areas of expertise. No new attributes are generated.…”
Section: Introduction
confidence: 99%