2019
DOI: 10.1016/j.patcog.2018.12.020

A label-specific multi-label feature selection algorithm based on the Pareto dominance concept


Cited by 70 publications (28 citation statements)
References 27 publications
“…Li et al. [10] proposed a granular MLFS method that attempts to select a more compact feature subset using information granules of the labels instead of the entire label set. Kashef and Nezamabadi-pour [11] proposed a Pareto dominance-based multilabel feature filter for online feature selection, which handles features that arrive sequentially. Gonzalez-Lopez et al. [12, 13] proposed distributed models that measure the quality of each feature based on mutual information on Apache Spark.…”
Section: Related Work (mentioning)
confidence: 99%
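The Pareto dominance filter mentioned in the excerpt above can be summarized concisely: score each candidate feature on several criteria (for example, one relevance score per label) and keep only the non-dominated features. The sketch below is a minimal illustration under that assumption; the names `dominates` and `pareto_front`, and the layout of the score matrix, are hypothetical and not taken from the cited implementation.

```python
# Minimal sketch of a Pareto dominance test for feature filtering.
# Assumption: each feature has a vector of scores (higher is better),
# e.g. one relevance value per label.
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """Score vector `a` Pareto-dominates `b` if it is at least as good on
    every criterion and strictly better on at least one."""
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(scores: np.ndarray) -> list[int]:
    """Return indices of non-dominated features.
    `scores` has shape (n_features, n_criteria)."""
    n = scores.shape[0]
    keep = []
    for i in range(n):
        if not any(dominates(scores[j], scores[i]) for j in range(n) if j != i):
            keep.append(i)
    return keep

# Toy example: 4 features scored against 2 labels.
scores = np.array([[0.9, 0.2],
                   [0.8, 0.1],   # dominated by feature 0
                   [0.3, 0.7],
                   [0.5, 0.5]])
print(pareto_front(scores))  # -> [0, 2, 3]
```

In an online setting, as described in the excerpt, the same test can be applied incrementally: each newly arriving feature is compared against the current non-dominated set rather than against all features at once.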
“…In recent years, many algorithm adaptation-based multi-label feature selection methods that select features directly from the multi-label data set have been proposed. Kashef and Nezamabadi-pour [15] propose a multi-label feature selection algorithm based on the Pareto dominance concept that selects label-specific features by casting the task as a multi-objective optimization problem. Sun et al.…”
Section: Related Work (mentioning)
confidence: 99%
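One plausible way to obtain the per-label criteria that such a label-specific, multi-objective formulation operates on is to compute a separate relevance score for each feature/label pair, for instance via mutual information (also the quality measure mentioned for the distributed models above). This is an illustrative assumption rather than the cited papers' exact measure; `per_label_mi` is a hypothetical helper.

```python
# Hedged sketch: build an (n_features, n_labels) criteria matrix from
# per-label mutual information. Rows of this matrix could feed a Pareto
# filter like the one sketched earlier. The use of mutual information is
# an assumption for illustration, not the cited papers' exact measure.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def per_label_mi(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Entry (f, l) is the estimated mutual information between feature f
    and binary label l."""
    return np.column_stack(
        [mutual_info_classif(X, Y[:, l], random_state=0) for l in range(Y.shape[1])]
    )

# Toy usage: 100 samples, 5 features, 3 binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = (X[:, :3] + 0.1 * rng.normal(size=(100, 3)) > 0).astype(int)
print(per_label_mi(X, Y).shape)  # (5, 3)
```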
“…High-dimensional multi-label data sets often contain a large number of irrelevant and redundant features that burden multi-label learning with drawbacks such as high computational cost and over-fitting [10, 11, 12]. To address this problem, many multi-label feature selection techniques have been proposed to select an informative feature subset from the original feature set and discard irrelevant and redundant features [13, 14, 15]. Feature selection techniques not only reduce computing costs but also effectively improve classification performance [16].…”
Section: Introduction (mentioning)
confidence: 99%
“…At present, multilabel classification has drawn widespread attention, and multilabel data, annotated with sets of labels, contain a vast number of noisy, redundant, or irrelevant features that decrease classification accuracy [1]. As an important preprocessing step for multilabel classification, feature selection aims to remove redundant or irrelevant features, mitigate the curse of dimensionality, and extract useful information to improve classification performance [2].…”
Section: Introduction (mentioning)
confidence: 99%