2016
DOI: 10.3390/e18110405
Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

Abstract: Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradat…
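The abstract describes ranking features by their dependency on multiple labels. A minimal sketch of one such dependency-ranking scheme, using empirical mutual information summed over labels, is shown below. This is an illustration of the general approach, not the paper's exact score function; the function names and the toy score are assumptions.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete vectors."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint probability estimate
            if p_xy > 0:
                p_x = np.mean(x == xv)
                p_y = np.mean(y == yv)
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

def rank_features(X, Y):
    """Score each feature by its summed MI against all labels; return
    feature indices sorted from most to least important, plus the scores."""
    scores = np.array([sum(mutual_information(X[:, f], Y[:, l])
                           for l in range(Y.shape[1]))
                       for f in range(X.shape[1])])
    return np.argsort(-scores), scores
```

Note that this naive formulation evaluates every feature against every label, which is exactly the per-label cost the paper's entropy-based label selection is designed to reduce.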

Cited by 15 publications (7 citation statements); References 48 publications.
“…Equation (10) indicates that the score value of each feature is influenced by the entropy value of each label, and this fact implies Proposition 1 as follows [40].…”
Section: Label Subset Selection
confidence: 98%
“…However, these MI-based score functions commonly require the calculation of the dependencies between all variable pairs composed of a feature and a label [14]. Thus, they share the same drawback in terms of computational efficiency because labels known to have no influence on the evaluation of feature importance are included in the calculations [15,40]. In contrast to our previous study, our method proposed in this study discards unimportant labels explicitly prior to any multilabel learning process.…”
Section: Multilabel Feature Selection
confidence: 99%
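The citing paper's point above, that labels with negligible entropy can be discarded before any feature scoring, can be sketched as follows. The threshold value and function names are illustrative assumptions, not the paper's actual selection rule.

```python
import numpy as np

def label_entropy(y):
    """Shannon entropy (in bits) of a discrete label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def select_labels(Y, min_entropy=0.1):
    """Keep only label columns whose entropy exceeds min_entropy.
    Near-constant labels carry little information about feature
    relevance, so dropping them shrinks the per-feature scoring cost."""
    kept = [l for l in range(Y.shape[1]) if label_entropy(Y[:, l]) > min_entropy]
    return Y[:, kept], kept
```

Any MI-based feature score computed afterwards then iterates over the reduced label set only, which is the efficiency gain the quoted passage refers to.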
“…Traditional multi-label feature selection methods can be divided into two groups: problem transformation and algorithm adaptation [32], [33]. The problem transformation methods involve two steps: (1) transforming the multilabel data sets to multiple groups of single-label data sets;…”
Section: Related Work
confidence: 99%
“…Conventional multi-label feature selection methods can be divided into two groups to deal with multi-label data sets: problem transformation and algorithm adaptation [37,38]. The problem transformation methods include two steps: (1) transform the multi-label data set into numerous single-label data sets; (2) select the relevant features from the transformed data sets.…”
Section: Related Work
confidence: 99%
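The two-step problem-transformation scheme quoted above can be sketched with a binary-relevance-style selector: each label column becomes its own single-label task, features are scored against it, and the per-label selections are pooled. The scoring function and top-k pooling here are illustrative assumptions, not a method from the cited works.

```python
import numpy as np

def corr_score(x, y):
    """Toy single-label relevance score: absolute Pearson correlation."""
    return abs(np.corrcoef(x, y)[0, 1])

def binary_relevance_selection(X, Y, score_fn, k=2):
    """Problem-transformation sketch: treat each label column as a
    single-label task, score every feature against it, and pool the
    top-k features selected for each label."""
    selected = set()
    for l in range(Y.shape[1]):
        scores = [score_fn(X[:, f], Y[:, l]) for f in range(X.shape[1])]
        top = np.argsort(scores)[::-1][:k]       # best k features for this label
        selected.update(int(f) for f in top)
    return sorted(selected)
```

Algorithm-adaptation methods, by contrast, modify the feature-selection criterion itself to handle all labels jointly rather than decomposing the problem this way.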