A novel attribute reduction approach for multi-label data based on rough set theory
2016 · DOI: 10.1016/j.ins.2016.07.008

Cited by 77 publications (26 citation statements)
References 33 publications
“…is a set of decision attributes. In general, T ∩ C = ∅ and each attribute from T ∪ C forms a mapping f : U → V, where V is the value domain of T and C [27,28]. Each nonempty subset P ⊆ T determines an indiscernibility relation as…”
Section: The Rough Set Theory
confidence: 99%
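The indiscernibility relation quoted above partitions the universe U into equivalence classes of objects that agree on every attribute in the chosen subset P. A minimal Python sketch of that partitioning, using a toy decision table (the attribute names and values are illustrative, not taken from the paper):

```python
from collections import defaultdict

# Toy decision table: each object maps attribute -> value.
# "a" and "b" play the role of condition attributes, "d" a decision attribute.
U = {
    "x1": {"a": 1, "b": 0, "d": "yes"},
    "x2": {"a": 1, "b": 0, "d": "no"},
    "x3": {"a": 0, "b": 1, "d": "yes"},
    "x4": {"a": 1, "b": 1, "d": "no"},
}

def indiscernibility_classes(U, P):
    """Partition U into equivalence classes of objects that take
    identical values on every attribute in P."""
    classes = defaultdict(set)
    for x, row in U.items():
        key = tuple(row[a] for a in P)
        classes[key].add(x)
    return list(classes.values())

# x1 and x2 agree on both "a" and "b", so they are indiscernible w.r.t. P = {a, b}.
print(indiscernibility_classes(U, ["a", "b"]))
```

Attribute reduction in rough set theory then asks for a minimal subset P that induces the same partition (or preserves the same decision regions) as the full attribute set.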
“…For example, the RS theory mainly focuses on the attribute reduction problem [23,28] and shows much potential in the multilabel learning [29,36], while the PCA is usually applied to the dimension reduction problem in the machine learning [9,16] and multi-objective optimization (MOO) [18,20]. What's more, the RS theory reduces attributes based on the partition of equivalence relation, which can be accomplished through the evaluation metric [30,31], like mutual information and information entropy [10,38]. In general, the RS theory achieves better performance on the categorical attribute dataset [1,11].…”
Section: Literature Review 2.1 Index System Reduction Algorithms
confidence: 99%
“…Another disadvantage of this classifier is that it does not take into consideration the dependency between labels. However, it is still one of the most popular ML classifiers that is frequently used in many ML-FS papers (Doquire & Verleysen, 2013a;Jungjit & Freitas, 2015a;Lee & Kim, 2015a;Li, Li, Zhai, Wang, & Zhang, 2016;Lin, Hu, Liu, Chen, & Duan, 2016).…”
Section: Knn-based ML Classifiers
confidence: 99%
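The limitation noted in the excerpt — a kNN-style multi-label classifier voting on each label independently, ignoring dependencies between labels — is easy to see in a minimal sketch (the function name and toy data are illustrative, not from any of the cited papers):

```python
import numpy as np

def multilabel_knn_predict(X_train, Y_train, x, k=3):
    """Predict a binary label vector for query x by majority vote
    among its k nearest training instances."""
    # Euclidean distance from the query to every training instance.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbours.
    nn = np.argsort(dists)[:k]
    # Each label column is voted on independently: no information
    # about label co-occurrence is used, which is the limitation
    # the excerpt points out.
    return (Y_train[nn].mean(axis=0) >= 0.5).astype(int)

X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
Y_train = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])  # two binary labels
print(multilabel_knn_predict(X_train, Y_train, np.array([0.05, 0.0]), k=3))
```

Because the per-label votes are taken column by column, a strong correlation such as "label 1 never co-occurs with label 2" cannot influence the prediction.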