2019
DOI: 10.1007/s10994-019-05837-8
Multi-label optimal margin distribution machine

Abstract: Multi-label support vector machine (Rank-SVM) is a classic and effective algorithm for multi-label classification. The pivotal idea is to maximize the minimum margin of label pairs, which is extended from SVM. However, recent studies disclosed that maximizing the minimum margin does not necessarily lead to better generalization performance, and instead, it is more crucial to optimize the margin distribution. Inspired by this idea, in this paper, we first introduce margin distribution to multi-label learning an…
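The abstract contrasts Rank-SVM's objective (maximize the minimum margin over relevant/irrelevant label pairs) with optimizing the margin *distribution* (its first- and second-order statistics). A minimal sketch of that distinction, assuming a Rank-SVM-style pairwise margin; the score values and the helper name `label_pair_margins` are invented for illustration, not the authors' implementation:

```python
import numpy as np

def label_pair_margins(scores, labels):
    """For one instance, return the margins f_p(x) - f_q(x) over all
    (relevant p, irrelevant q) label pairs, as in Rank-SVM's ranking loss."""
    relevant = np.where(labels == 1)[0]
    irrelevant = np.where(labels == 0)[0]
    return np.array([scores[p] - scores[q]
                     for p in relevant for q in irrelevant])

# Toy example: 4 labels, model scores, and ground-truth relevance.
scores = np.array([1.2, -0.3, 0.8, -1.0])
labels = np.array([1, 0, 1, 0])

m = label_pair_margins(scores, labels)
min_margin = m.min()            # what Rank-SVM maximizes
mean, var = m.mean(), m.var()   # distribution statistics a margin-
                                # distribution objective would target
```

A margin-distribution objective prefers hypotheses with a large margin mean and small margin variance over the whole training set, rather than being driven solely by the single worst label pair.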

Cited by 37 publications (24 citation statements)
References 34 publications
“…We used EfficientNet-B0 trained on ImageNet as a baseline CNN. EfficientNet [35] has been demonstrated to achieve high accuracy on ImageNet and provide an order of magnitude higher efficiency than previous models, such as ResNet and Xception. In the SimCLR framework (see ESI, † Fig.…”
Section: Results
confidence: 99%
“…Technically, focusing on the widely used kernel-based algorithms [14,15,13], we present the generalization analyses based on Rademacher complexity [16] and the vector-contraction inequality [17], following recent work [13]. For the Fisher consistency, we consider more general reweighted univariate losses, which naturally extends the results in prior work [9,8].…”
Section: Algorithm
confidence: 99%
“…In the following, we consider the kernel-based learning algorithms which have been widely used in practice [3,28,29,14,15] and in theory [13] in MLC. Besides, our following analyses can be extended to other forms of hypothesis class, such as neural networks [30].…”
Section: Learning Algorithms
confidence: 99%
“…Various types of algorithm adaptation approaches have been introduced based on k-NN [7,33], decision trees [34], regression [35], Support Vector Machines (SVM) [36], and feed-forward neural networks [37,38]. In order to achieve high classification performance, recent studies consider label distributions and their correlations in the label learning process [39,40]. These algorithms, however, cannot cope with the situation where new label information is sequentially provided.…”
Section: Multi-label Classification
confidence: 99%