2016
DOI: 10.48550/arxiv.1608.06608
Preprint

Infinite-Label Learning with Semantic Output Codes

Abstract: We formalize a new statistical machine learning paradigm, called infinite-label learning, to annotate a data point with more than one relevant label from a candidate set which pools both the finite labels observed at training and a potentially infinite number of previously unseen labels. Infinite-label learning fundamentally expands the scope of conventional multi-label learning and better meets the practical requirements in various real-world applications, such as image tagging, ads-query association, an…
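The feasibility claim behind the paradigm can be illustrated with a minimal sketch. Assume (hypothetically; none of the names or dimensions below come from the paper) that a label's relevance to an input is governed by a bilinear compatibility between the input and the label's semantic code. A map fitted on seen labels alone then also scores labels never observed at training:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 4  # input dim, semantic-code dim

# Semantic codes: four seen labels whose codes span the code space,
# plus "zebra", which never appears during training.
seen_codes = np.eye(k)            # stand-ins for, e.g., word embeddings
zebra = rng.normal(size=k)
zebra /= np.linalg.norm(zebra)

# Hidden ground-truth bilinear relevance model: the label with code e is
# relevant to input x iff x @ A_true @ e > 0.
A_true = rng.normal(size=(d, k))

# Training data: inputs paired with relevance scores for SEEN labels only.
X = rng.normal(size=(50, d))
feats = np.stack([np.outer(x, e).ravel() for x in X for e in seen_codes])
targets = np.array([x @ A_true @ e for x in X for e in seen_codes])

# Least-squares fit of the vectorized bilinear map W from seen-label data.
w, *_ = np.linalg.lstsq(feats, targets, rcond=None)
W = w.reshape(d, k)

# The same W scores the unseen label: f(x, zebra) = x @ W @ zebra.
X_test = rng.normal(size=(5, d))
pred = np.sign(X_test @ W @ zebra)
true = np.sign(X_test @ A_true @ zebra)
print(np.array_equal(pred, true))  # True: unseen-label decisions transfer
```

The sketch works because the seen labels' codes span the semantic space, so the training pairs pin down the compatibility map everywhere, including at codes of unseen labels; this is the intuition, not the paper's formal argument.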

Cited by 1 publication (6 citation statements)
References 21 publications

“…[spilled results-table values for DSP [16] and other methods omitted] …of all five evaluation metrics under all three evaluation scenarios. Such results demonstrate that simply combining semantic representations of co-occurring multiple labels into one collective representation leads to a catastrophic loss of semantic information, which is mainly responsible for the poor performance of DSP and ConSE in multi-label recognition.…”
Section: Results On Comparison To State-of-the-art Methods
confidence: 99%
“…Zero-shot learning (ZSL) has attracted much attention in recent years and provides a promising technique for recognizing a large number of classes without requiring training data for all of them. Very recently, [5] formally showed that it is feasible to predict a collection of infinitely many unseen labels with a classifier learned from training data covering only a subset of the labels in this collection, where multi-label ZSL is a special case of this so-called "infinite-label learning" paradigm. According to a ZSL taxonomy [15], existing ZSL approaches are divided into three categories, namely, direct mapping [16,17,18,19,20], model parameter transfer [21,22] and joint latent space learning [23,24,25,26,27,28,15,29].…”
Section: Multi-label Zero-shot Learning
confidence: 99%
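On the inference side, the "special case" relationship in the excerpt above boils down to multi-label prediction over a pooled candidate set: every candidate label, seen or unseen, gets a compatibility score, and the predicted tags are those clearing a threshold. A minimal sketch, in which the label names, their codes, and the random stand-in for an already-trained map W are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 6, 4  # input dim, semantic-code dim

# Assume a compatibility map W (scoring input-code pairs) was already trained
# on seen labels; a random stand-in here, since only inference is illustrated.
W = rng.normal(size=(d, k))

# Pooled candidate set: seen labels plus labels never observed at training.
candidates = {
    "cat": rng.normal(size=k), "dog": rng.normal(size=k),    # seen
    "okapi": rng.normal(size=k), "quoll": rng.normal(size=k) # unseen
}

def tag(x, threshold=0.0):
    """Multi-label prediction over the pooled set: keep every label whose
    compatibility score f(x, e) = x @ W @ e clears the threshold."""
    scores = {name: float(x @ W @ e) for name, e in candidates.items()}
    return sorted(n for n, s in scores.items() if s > threshold)

x = rng.normal(size=d)
print(tag(x))  # predicted tag set; may mix seen and unseen labels
```

With a finite pooled set this is exactly multi-label ZSL; letting the candidate dictionary grow without bound is what the infinite-label view adds.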