Group-preserving label-specific feature selection for multi-label learning
2023
DOI: 10.1016/j.eswa.2022.118861

Cited by 32 publications (7 citation statements)
References 56 publications
“…Experimental findings reveal that our proposed supervised MEFS method experiences an increase in time complexity as the number of source domain samples increases, while it cannot guarantee optimal feature selection on the target domain, thus exhibiting limitations in both performance and efficiency. Recently, researchers have proposed efficient and unsupervised multi-label feature selection methods (Zhang et al., 2020b, 2023) that demonstrate promising performance when dealing with larger sample sets. These feature selection methods have the potential to be expanded and adapted for the selection of more complex MI-EEG feature representations.…”
Section: Discussion
Confidence: 99%
“…The CLML [9] algorithm proposed by Li et al. first uses a norm within the label-specific feature (LSF) framework to extract the features common to all labels. Subsequently, the GLFS [21] algorithm proposed by Zhang et al. builds a group-preserving optimization framework for feature selection, learning the common features of similar labels and the private features of each label via K-means clustering. Based on the above analysis, we adopt a causal learning algorithm to learn asymmetric label correlations (LC) among labels within the LSF learning framework.…”
Section: Related Work
Confidence: 99%
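
The published GLFS method solves a dedicated group-preserving optimization problem, which an excerpt cannot reproduce. Purely as a hedged illustration of the two-stage idea the excerpt describes (cluster labels with K-means, then learn group-common and label-private features), here is a minimal Python sketch; the χ²-based feature scoring, the parameter defaults, and the helper name group_label_feature_sketch are illustrative assumptions, not part of GLFS.

```python
# Sketch of the group-preserving idea behind GLFS-style label-specific
# feature selection (NOT the published GLFS optimization):
#   1) cluster labels with K-means so similar labels share a group;
#   2) score features per group (common) and per label (private).
# The chi2 scoring and parameter defaults are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import chi2

def group_label_feature_sketch(X, Y, n_label_groups=3, k_common=10, k_private=5):
    """X: (n_samples, n_features) non-negative features;
    Y: (n_samples, n_labels) binary label matrix."""
    # Cluster label column vectors: labels that co-occur across samples
    # tend to land in the same group.
    groups = KMeans(n_clusters=n_label_groups, n_init=10,
                    random_state=0).fit_predict(Y.T)
    common, private = {}, {}
    for g in range(n_label_groups):
        members = np.where(groups == g)[0]
        # Group-common features: score against the union of the group's labels.
        union = (Y[:, members].sum(axis=1) > 0).astype(int)
        scores, _ = chi2(X, union)
        common[g] = set(np.argsort(scores)[::-1][:k_common])
        # Label-private features: per-label scores, excluding the shared ones.
        for l in members:
            s, _ = chi2(X, Y[:, l])
            ranked = [f for f in np.argsort(s)[::-1] if f not in common[g]]
            private[l] = ranked[:k_private]
    return groups, common, private

# Toy usage with random non-negative data (chi2 requires non-negative X).
rng = np.random.default_rng(0)
X = rng.random((100, 40))
Y = (rng.random((100, 6)) > 0.7).astype(int)
groups, common, private = group_label_feature_sketch(X, Y)
```

In GLFS itself the grouping and the feature weights are learned jointly inside one objective; the decoupled two-pass version above only mirrors the intuition of shared versus private features.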
“…Since a multi-label learning framework is trained using known features and labeled samples, researchers have proposed five commonly used evaluation metrics for judging the effectiveness of a multi-label algorithm: Average Precision (AP), Coverage (CV), Hamming Loss (HL), Ranking Loss (RL), and One-Error (OE) [12].…”
Section: Multi-label Learning Evaluation Metric
Confidence: 99%
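
Four of these five metrics ship with scikit-learn; One-Error does not, but it follows directly from its definition (the fraction of instances whose top-ranked label is not actually relevant). Below is a minimal sketch assuming a binary ground-truth matrix Y_true and real-valued ranking scores Y_score; the 0.5 threshold used to binarize scores for Hamming Loss is an illustrative choice, not part of the metric.

```python
# Minimal sketch of the five standard multi-label metrics named above.
import numpy as np
from sklearn.metrics import (hamming_loss, coverage_error,
                             label_ranking_loss,
                             label_ranking_average_precision_score)

def one_error(Y_true, Y_score):
    # Fraction of instances whose single top-ranked label is not relevant.
    top = np.argmax(Y_score, axis=1)
    return np.mean(Y_true[np.arange(len(Y_true)), top] == 0)

# Toy data: 50 instances, 5 labels.
rng = np.random.default_rng(0)
Y_true = (rng.random((50, 5)) > 0.6).astype(int)
Y_score = rng.random((50, 5))

print("AP:", label_ranking_average_precision_score(Y_true, Y_score))
# scikit-learn's coverage_error counts from 1, so it is one greater than
# the coverage defined in the multi-label literature; subtract 1 to match.
print("CV:", coverage_error(Y_true, Y_score) - 1)
print("HL:", hamming_loss(Y_true, (Y_score >= 0.5).astype(int)))
print("RL:", label_ranking_loss(Y_true, Y_score))
print("OE:", one_error(Y_true, Y_score))
```

Smaller is better for CV, HL, RL, and OE, while larger is better for AP, which is worth keeping in mind when comparing algorithms across all five numbers.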