2018
DOI: 10.1109/tnnls.2017.2691545

End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes

Abstract: To make the problem of multilabel classification with many classes more tractable, in recent years, academia has seen efforts devoted to performing label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so as to train predictive models at much lower costs. With respect to the prediction, it performs classification for any unseen instance by recovering a label vector from its predicted code vector via a decoding…
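The encode–train–decode pipeline the abstract describes can be sketched as follows. This is a minimal illustration using an SVD-based label encoder and a ridge regressor as stand-ins; it is not the paper's E²FE method, and the class and variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Minimal sketch of the generic LSDR pipeline from the abstract. The
# SVD-based encoder and ridge regressor are illustrative stand-ins,
# not the paper's E2FE method.
class SimpleLSDR:
    def __init__(self, k):
        self.k = k  # code dimension, k << number of labels

    def fit(self, X, Y):
        # Encode: project the n x L label matrix Y onto its top-k
        # right-singular directions to obtain n x k code vectors.
        _, _, Vt = np.linalg.svd(Y.astype(float), full_matrices=False)
        self.V = Vt[: self.k].T          # L x k decoding matrix
        Z = Y @ self.V                   # n x k code vectors
        # Train a cheap predictive model from features to codes.
        self.reg = Ridge().fit(X, Z)
        return self

    def predict(self, X):
        # Decode: recover label vectors from predicted codes,
        # then round to binary labels.
        Y_hat = self.reg.predict(X) @ self.V.T
        return (Y_hat > 0.5).astype(int)
```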

Cited by 25 publications (10 citation statements)
References: 34 publications
“…Lin et al [12] proposed that Label Space Dimension Reduction (LSDR) is performed using a unique technique called End-to-End Feature-Aware Label Space Encoding (E²FE).…”
Section: Literature Review (confidence: 99%)
“…Giunchiglia et al [5] recently proposed C-HMCNN(h) which exploits the hierarchy information in order to produce predictions coherent with the constraint. Lin et al [6] proposed E²FE which directly learns a feature-aware code matrix via jointly maximizing the recoverability of the label space and the predictability of the latent space, and gains performance improvements over other state-of-the-art LSDR methods.…”
Section: A Semantic Descriptions (confidence: 99%)
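The joint objective this quote describes can be made concrete with a small sketch. The trace-maximization form, the ridge-regularized projection, and the weights alpha and gamma below are assumptions chosen to illustrate the two criteria (recoverability of the labels and predictability from the features), not the paper's exact formulation.

```python
import numpy as np

# Illustrative feature-aware label-space encoding in the spirit of the
# quote: learn an n x k code matrix Z that reflects both the label
# similarity structure (recoverability) and the span of the features
# (predictability). The objective form and weights are assumptions.
def feature_aware_codes(X, Y, k, alpha=1.0, gamma=1e-3):
    # Recoverability term: similarity structure of the label matrix Y.
    M = Y @ Y.T
    # Predictability term: ridge-regularized projection onto the span of X.
    H = X @ np.linalg.solve(X.T @ X + gamma * np.eye(X.shape[1]), X.T)
    # Under an orthonormality constraint Z^T Z = I, the top-k
    # eigenvectors of the combined matrix maximize the summed traces.
    w, V = np.linalg.eigh(M + alpha * H)
    return V[:, np.argsort(w)[::-1][:k]]  # n x k code matrix Z
```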
“…We suspect this is because our style transfer model is trained on art works with artistic styles such as blurring, dim and sharp contrasts, hence cannot use the same aesthetic scoring rules as ordinary images. The average value of the score distributions for all wallpaper images is mainly concentrated in [3,6]. After investigation, we suppose this is because NIMA is trained on the AVA dataset [43], and the images in this dataset are high-definition photos.…”
Section: Style Transfer and Aesthetic Evaluation (confidence: 99%)
“…It is a supervised statistical technique used to examine the inter-relation between a set of input data or variables. These two techniques help pre-process the data, transforming it into a model-acceptable format to obtain the desired result [22].…”
Section: Data Collection and Preprocessing (confidence: 99%)
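As one concrete reading of this quote, a minimal preprocessing pipeline might combine feature scaling with a supervised projection. LDA is used below only as a common example of a supervised technique; the quote does not name the exact method, so the pipeline and the variable names are assumptions.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical preprocessing pipeline in the spirit of the quote above:
# scale the raw inputs, then apply a supervised projection. LDA is one
# common supervised choice; the quote does not name the exact technique.
preprocess = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=2),
)
# Usage (X_raw: feature matrix, y: class labels; both names hypothetical):
# X_model_ready = preprocess.fit_transform(X_raw, y)
```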