2022
DOI: 10.1016/j.knosys.2021.107924
Non-negative multi-label feature selection with dynamic graph constraints

Cited by 39 publications (5 citation statements)
References 31 publications
“…Both [10,11] used dynamic graphs to learn the underlying manifold structure of samples or labels and then combined them with linear regression to build feature selection models. Reference [10] strengthens the local connection between samples and labels by combining with a subspace to better capture specific features.…”
Section: Literature Review
confidence: 99%
“…Reference [10] strengthens the local connection between samples and labels by combining with a subspace to better capture specific features. Reference [11] strengthened the correlation between the weight matrix and the sample space, and between the weight matrix and the label space, by comprehensively constraining the weight matrix, making it more representative of feature weights and making features easier to distinguish.…”
Section: Literature Review
confidence: 99%
“…Firstly, four evaluation criteria are explained. Then we use ten different datasets (Corel5k, Delicious, Flags, Medical, Scene, Enron, GenBase, Social, Yeast, and Emotions) to test CRMIL and compare CRMIL with eight traditional multi-label feature selection algorithms, which are SCLS [26], D2F [30], FIMF [31], PMU [3], AMI [32], NMDG [33], FSSL [34], and MFS-MCDM [35].…”
Section: Results
confidence: 99%