2020
DOI: 10.1109/access.2020.3014916

Multilabel Feature Selection Using Mutual Information and ML-ReliefF for Multilabel Classification

Abstract: Recently, multilabel classification algorithms have played an increasingly significant role in data mining and machine learning. However, some existing mutual information-based algorithms ignore the influence of label proportions on the degree of correlation between features and label sets. Moreover, most traditional ReliefF algorithms cannot accurately measure the correlation degree of label sets, and the division of heterogeneous neighbors leads to repeated calculation. To overcome these shortcomi…

Cited by 8 publications (3 citation statements)
References 53 publications
“…The larger the frequency shift, the quicker the tag travels away from the antenna; when the tag is moving away from the antenna, the echo signal has a lower frequency than the original signal. The larger the frequency shift, the quicker the tag travels relative to the antenna [14].…”
Section: Human Motion Feature Classification Algorithm in Moving Scene
confidence: 99%
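The Doppler relationship quoted above can be illustrated with a short calculation. This is a generic sketch of the round-trip Doppler shift for a backscattered signal, f_d = 2·v·f_c / c; the carrier frequency and velocities are assumptions for illustration, not values from the cited paper:

```python
# Illustrative round-trip Doppler shift for a backscatter (RFID-style)
# tag. Assumption: UHF carrier at 920 MHz; v > 0 means the tag moves
# toward the antenna, so a negative shift means a lower echo frequency
# (tag moving away), matching the statement quoted above.
C = 3e8        # speed of light, m/s
F_C = 920e6    # assumed UHF RFID carrier frequency, Hz

def doppler_shift(radial_velocity_mps: float) -> float:
    """Round-trip Doppler shift in Hz for a backscattered signal."""
    return 2 * radial_velocity_mps * F_C / C
```

For example, a tag receding at 1 m/s produces a shift of about -6.1 Hz, and doubling the speed doubles the magnitude of the shift.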
“…The ReliefF algorithm is a typical filter-based feature selection method that is computationally simple and widely used. For a classification problem, each iteration randomly draws a sample R from the training set, identifies k nearest-neighbor samples from the same class (hits) and k nearest-neighbor samples from the dissimilar classes (misses), and updates the score of each feature according to Equation (4) [26]. Feature selection follows the principle of "aggregation within classes and dispersion between classes."…”
Section: ReliefF Feature Selection
confidence: 99%
“…It suggested the need for improvements in classifier accuracy and also reported a gap in work based on wrapper and embedded approaches. An algorithm for feature selection to improve multi-label classification performance is proposed in [6], which produced significant outcomes on fourteen multi-label datasets. Another algorithm, MOMFS, a multi-objective multi-label feature selection method based on two particle swarms, is introduced in [7].…”
Section: Literature Review
confidence: 99%