2019
DOI: 10.1109/tit.2019.2893916
Data-Dependent Generalization Bounds for Multi-Class Classification

Abstract: In this paper, we study data-dependent generalization error bounds exhibiting a mild dependency on the number of classes, making them suitable for multi-class learning with a large number of label classes. The bounds generally hold for empirical multi-class risk minimization algorithms using an arbitrary norm as regularizer. Key to our analysis are new structural results for multi-class Gaussian complexities and empirical ∞-norm covering numbers, which exploit the Lipschitz continuity of the loss function with …
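For orientation, data-dependent bounds of the kind the abstract describes typically take the following shape; this is a generic template stated in terms of the empirical Gaussian complexity of the loss class, not the paper's exact theorem, and the constants c_1, c_2 are placeholders.

```latex
% Generic template: with probability at least 1 - \delta over an i.i.d.
% sample of size n, simultaneously for every h in the hypothesis class H,
R(h) \;\le\; \widehat{R}_n(h)
     \;+\; c_1\, \widehat{\mathfrak{G}}_n(\ell \circ \mathcal{H})
     \;+\; c_2 \sqrt{\frac{\log(1/\delta)}{n}}
% where \widehat{R}_n is the empirical risk and \widehat{\mathfrak{G}}_n is
% the empirical Gaussian complexity of the loss class \ell \circ \mathcal{H}.
```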

Cited by 38 publications (37 citation statements)
References 51 publications
“…The precision is the ratio of the true positive results to all positive results [49], and the mean of the per-label precisions is the macro-precision. The recall is the ratio of the true positive results to all actual positive samples [48], and the mean of the per-label recalls is the macro-recall. The macro-F1 score is the harmonic mean of the macro-precision and the macro-recall.…”
Section: Comparison of the Classification Results of the Four Methods
confidence: 99%
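A minimal sketch of how these macro-averaged scores can be computed from true and predicted class indices; the function name, its signature, and the zero-division handling are illustrative assumptions rather than anything taken from the cited papers.

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-precision, macro-recall, and macro-F1 from class indices."""
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for label c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives for label c
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives for label c
        # Precision: true positives over all predicted positives (0 if none predicted).
        precisions.append(tp / (tp + fp) if tp + fp > 0 else 0.0)
        # Recall: true positives over all actual positives (0 if label c is absent).
        recalls.append(tp / (tp + fn) if tp + fn > 0 else 0.0)
    macro_p, macro_r = np.mean(precisions), np.mean(recalls)
    # Macro-F1: harmonic mean of macro-precision and macro-recall, as in the quote.
    macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r > 0 else 0.0
    return macro_p, macro_r, macro_f1
```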
“…The experimental results of the four methods are summarized in Table 5; we used four evaluation indicators (accuracy, macro-precision, macro-recall, and macro-F1 score) to assess the classification performance of the models. The accuracy represents the ratio of the correctly classified samples to the total samples [48]. The precision is the ratio of the true positive results to all positive results [49], and the mean of the per-label precisions is the macro-precision.…”
Section: Results
confidence: 99%
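Continuing the sketch above, a toy usage example that also computes the accuracy as the fraction of correctly classified samples; the labels below are made up for illustration and are not data from the cited experiments.

```python
import numpy as np

# Made-up toy labels for three classes (not from the cited experiments).
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 1])

# Accuracy: correctly classified samples over total samples -> 4/6 ≈ 0.667.
accuracy = np.mean(y_true == y_pred)

# Macro scores via the macro_metrics sketch defined earlier.
macro_p, macro_r, macro_f1 = macro_metrics(y_true, y_pred, n_classes=3)
print(accuracy, macro_p, macro_r, macro_f1)
```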
“…For multi-class classification, a theory of extreme classification has been developed in recent work [25]. In a similar context, the behavior of tail labels for flat classification and for classification with taxonomies has been studied in previous work [4,2,5,3,6].…”
Section: Predictive Performance / Related Work
confidence: 99%
“…But these systems are completely different. A multiclass classification system aims to assign each document to a single class out of multiple (i.e., more than two) classes [3]. On the other hand, multilabel systems assign one or more classes to a particular document [4].…”
Section: Introduction
confidence: 99%
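A minimal sketch of how the two target representations differ, using made-up toy labels: a multiclass target stores exactly one class index per document, while a multilabel target stores a binary indicator for each class.

```python
import numpy as np

# Multiclass: exactly one class per document, stored as a single index.
multiclass_target = np.array([2, 0, 1])

# Multilabel: a binary indicator per class, so several classes can be
# active for the same document at once.
multilabel_target = np.array([
    [0, 0, 1],  # document 0: class 2 only
    [1, 0, 1],  # document 1: classes 0 and 2
    [0, 1, 0],  # document 2: class 1 only
])
```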
“…In multilabel systems, the more the classes depend on one another, the more efficiently multiple classes can be assigned to a particular document [4], whereas in a multiclass problem each document must preserve features unique to a particular class rather than share them with other classes. It is hard to ensure label independence in that case, which negatively affects the performance of the classification system [3]. Eventually, all these problems may be partially or completely solved by enhancing feature representation methods.…”
Section: Introduction
confidence: 99%