2019
DOI: 10.1609/aaai.v33i01.33013304

Two-Stage Label Embedding via Neural Factorization Machine for Multi-Label Classification

Abstract: Label embedding has been widely used to exploit label dependency with dimension reduction in multi-label classification tasks. However, existing embedding methods attempt to extract label correlations directly, and thus they can easily be trapped by complex label hierarchies. To tackle this issue, we propose a novel Two-Stage Label Embedding (TSLE) paradigm that involves a Neural Factorization Machine (NFM) to jointly project features and labels into a latent space. In the encoding phase, we introduce a …
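The abstract is truncated mid-sentence, but the Neural Factorization Machine building block it names is standard. Below is a minimal, illustrative PyTorch sketch of jointly embedding features and labels with NFM-style bi-interaction pooling; the class name, layer sizes, and the treatment of every feature and label as a factorization-machine field are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class JointNFMEncoder(nn.Module):
    # Sketch: embed input features and labels as FM fields and pool them.
    def __init__(self, num_features, num_labels, embed_dim, latent_dim):
        super().__init__()
        # One embedding vector per field; features and labels share the table.
        self.field_emb = nn.Parameter(0.01 * torch.randn(num_features + num_labels, embed_dim))
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, x, y):
        # x: (batch, num_features) feature values; y: (batch, num_labels) binary labels.
        values = torch.cat([x, y.float()], dim=1)                     # (batch, fields)
        emb = self.field_emb.unsqueeze(0) * values.unsqueeze(-1)      # (batch, fields, embed_dim)
        # Bi-interaction pooling: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2),
        # aggregating all pairwise feature-label interactions in linear time.
        bi = 0.5 * (emb.sum(dim=1).pow(2) - emb.pow(2).sum(dim=1))    # (batch, embed_dim)
        return self.mlp(bi)                                           # shared latent code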

Cited by 29 publications (9 citation statements)
References 14 publications

“…We use three datasets: CIFAR-100 [13,21], ImageNet-Subset [7], and TinyImageNet [35] in our experiments. For a fair comparison with baseline class-incremental learning methods [1,11,17,37,43,49] in the FCIL setting, we follow the same protocols proposed by [37,49] to set incremental tasks, utilize the identical class order generated from iCaRL [37], and employ the same backbone (i.e., ResNet-18 [15]) as the classification model [2]. The SGD optimizer whose learning rate is 2.0 is used to train all models.…”
Section: Implementation Details (mentioning, confidence: 99%)
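The excerpt above pins down only the backbone (ResNet-18) and the optimizer (plain SGD with learning rate 2.0). A minimal sketch of that setup follows; the 100-way CIFAR-100 head and the absence of momentum or weight decay are assumptions, since the excerpt does not specify them.

import torch
import torchvision

# ResNet-18 backbone with a 100-way head (assumed for CIFAR-100).
model = torchvision.models.resnet18(num_classes=100)
# Plain SGD at the learning rate quoted in the excerpt.
optimizer = torch.optim.SGD(model.parameters(), lr=2.0)

def training_step(images, targets):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()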
“…In latent embedding learning methods the inputs and outputs are projected into a shared latent space [13,14,15]. An effective recent method is the Multivariate Probit Variational AutoEncoder (MPVAE) [6].…”
Section: Related Work (mentioning, confidence: 99%)
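A minimal sketch of the shared-latent-space idea this excerpt describes: one encoder projects input features and another projects label vectors into the same latent space, and training keeps matched pairs close while labels stay decodable from that space. The names, layer sizes, and the particular alignment loss are illustrative assumptions, not details of the cited methods.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedLatentModel(nn.Module):
    def __init__(self, feat_dim, label_dim, latent_dim):
        super().__init__()
        self.feat_enc = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.label_enc = nn.Sequential(nn.Linear(label_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Linear(latent_dim, label_dim)  # recover labels from the shared space

    def loss(self, x, y):
        z_x, z_y = self.feat_enc(x), self.label_enc(y.float())
        align = F.mse_loss(z_x, z_y)                     # pull the two projections together
        recon = F.binary_cross_entropy_with_logits(self.decoder(z_x), y.float())
        return recon + align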
“…The second group deals with latent embedding, in which they learn one shared latent space representing both input features and output labels (Bhatia et al 2015a;Yeh et al 2017;Tang et al 2018;Chen et al 2019a). Most recently, Bai, Kong, and Gomes (2020) propose MPVAE: it learns VAE-based probabilistic latent spaces for both labels and features and aligns the latent representations using the Kullback-Leibler divergence.…”
Section: Related Work (mentioning, confidence: 99%)
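A minimal sketch of the KL-based alignment this excerpt attributes to MPVAE, assuming the feature and label branches each output a diagonal Gaussian posterior over the latent code. It illustrates only the alignment term, not the MPVAE reference implementation.

import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    # summed over latent dimensions and averaged over the batch.
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1).mean()

# Example: align the feature-branch posterior with the label-branch posterior.
mu_x, logvar_x = torch.randn(8, 16), torch.zeros(8, 16)
mu_y, logvar_y = torch.randn(8, 16), torch.zeros(8, 16)
alignment_loss = gaussian_kl(mu_x, logvar_x, mu_y, logvar_y)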