2018
DOI: 10.1002/widm.1240

Multilabel feature selection: A comprehensive review and guiding experiments

Abstract: Feature selection has been an important issue in machine learning and data mining, and is unavoidable when confronted with high-dimensional data. With the advent of multilabel (ML) datasets and their vast applications, feature selection methods have been developed for dimensionality reduction and improvement of classification performance. In this work, we provide a comprehensive review of the existing multilabel feature selection (ML-FS) methods, and categorize these methods based on different perspective…

Cited by 94 publications (37 citation statements) | References 133 publications | Citing publications span 2018-2023

Citation statements (ordered by relevance):
“…F-score [60] is another feature selection approach that quantifies the discriminative ability of a variable (feature) based on the following equation:

F(i) = \frac{\sum_{k=1}^{c} \left( \bar{x}_{i}^{(k)} - \bar{x}_{i} \right)^{2}}{\sum_{k=1}^{c} \frac{1}{n_{k}-1} \sum_{j=1}^{n_{k}} \left( x_{i,j}^{(k)} - \bar{x}_{i}^{(k)} \right)^{2}}

where c is the number of classes, n is the number of features, n_k is the number of samples of feature i in class k, \bar{x}_{i} and \bar{x}_{i}^{(k)} are the means of feature i over all samples and over class k, respectively, and x_{i,j}^{(k)} is the j-th training sample for feature i in class k. Features are ranked by F-score, such that a higher F-score value corresponds to a more discriminative feature.…”
Section: Architecture of MI-Based BCI (mentioning)
confidence: 99%
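To make the ranking step concrete, the following is a minimal sketch of a Fisher-style F-score computed per feature with NumPy. The function name f_score and the within-class normalization by n_k - 1 are assumptions for illustration; the exact variant used in the cited work [60] may differ.

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score: between-class scatter divided by within-class scatter.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) class labels
    Returns one score per feature; higher means more discriminative.
    """
    overall_mean = X.mean(axis=0)
    numer = np.zeros(X.shape[1])
    denom = np.zeros(X.shape[1])
    for k in np.unique(y):
        X_k = X[y == k]
        n_k = X_k.shape[0]
        class_mean = X_k.mean(axis=0)
        numer += (class_mean - overall_mean) ** 2                          # between-class spread
        denom += ((X_k - class_mean) ** 2).sum(axis=0) / max(n_k - 1, 1)   # within-class variance
    return numer / (denom + 1e-12)                                         # guard against zero variance

# Rank features from most to least discriminative:
# ranking = np.argsort(f_score(X, y))[::-1]
```

Features at the top of this ranking would then be retained, with the cut-off chosen by cross-validation or a fixed budget.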
“…The one error, coverage, ranking loss, and average precision were used to evaluate the overall performance of all representation methods in ATC classification. These metrics are defined in detail in [131], and are frequently used for evaluating the performance of ATC classifiers. The area under the precision-recall (AUPR) curve and the area under the receiver operating characteristic (AUROC) curve were employed to evaluate the performance of all representation methods in bio-link prediction.…”
Section: Experiments Settings and Evaluation Metrics (mentioning)
confidence: 99%
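For concreteness, below is a minimal sketch of how these multilabel ranking metrics and the threshold-free AUPR/AUROC could be computed with scikit-learn. The names Y_true (binary label-indicator matrix), Y_score (real-valued label scores), and the helper one_error are assumptions for illustration, since one-error has no built-in scikit-learn function.

```python
import numpy as np
from sklearn.metrics import (coverage_error, label_ranking_loss,
                             label_ranking_average_precision_score,
                             average_precision_score, roc_auc_score)

def one_error(Y_true, Y_score):
    """Fraction of samples whose single top-ranked label is not a relevant label."""
    top = np.argmax(Y_score, axis=1)
    return float(np.mean(Y_true[np.arange(len(Y_true)), top] == 0))

def evaluate_multilabel(Y_true, Y_score):
    """Collect the ranking-based and threshold-free multilabel metrics in one dictionary."""
    return {
        "one_error": one_error(Y_true, Y_score),
        "coverage": coverage_error(Y_true, Y_score),
        "ranking_loss": label_ranking_loss(Y_true, Y_score),
        "average_precision": label_ranking_average_precision_score(Y_true, Y_score),
        "AUPR (micro)": average_precision_score(Y_true, Y_score, average="micro"),
        "AUROC (micro)": roc_auc_score(Y_true, Y_score, average="micro"),
    }
```

Lower values are better for one error, coverage, and ranking loss; higher values are better for average precision, AUPR, and AUROC.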
“…Some comprehensive literature reviews and research articles [15, 16, 17, 18] have discussed the problem of feature selection (FS) in multilabel classification (MLC) problems. There are different methods used to select relevant features from MLC datasets.…”
Section: Preliminaries (mentioning)
confidence: 99%