2019
DOI: 10.1109/access.2019.2934742

Joint Label-Density-Margin Space and Extreme Elastic Net for Label-Specific Features

Abstract: Label-specific features learning is a framework that extracts, for each label, the features most relevant to classifying that label. Existing label-specific features algorithms generally search for a particular feature set in the original label space. This kind of extraction adapts well when the label density is balanced. However, in most multi-label data sets the numbers of positive and negative labels differ greatly, and the label density is unbalanced…
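
The framework the abstract sketches can be illustrated with a LIFT-style construction: cluster the positive and negative instances of each label and use distances to the cluster centers as that label's specific features. The sketch below is an illustration of that general idea under assumed details (k-means, a cluster ratio of 0.1), not the paper's joint label-density-margin method; `label_density` simply makes the imbalance the abstract mentions visible.

```python
# A minimal LIFT-style sketch, not the paper's joint label-density-margin
# algorithm: distances to per-label clusters serve as label-specific features.
import numpy as np
from sklearn.cluster import KMeans

def label_density(Y):
    # Fraction of positive instances per label; values far from 0.5
    # signal the imbalance the abstract describes.
    return Y.mean(axis=0)

def label_specific_features(X, y, ratio=0.1, seed=0):
    # Cluster the positive and negative instances of one label separately,
    # then represent every sample by its distances to all cluster centers.
    pos, neg = X[y == 1], X[y == 0]
    k = max(1, int(ratio * min(len(pos), len(neg))))
    centers = np.vstack([
        KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pos).cluster_centers_,
        KMeans(n_clusters=k, n_init=10, random_state=seed).fit(neg).cluster_centers_,
    ])
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

rng = np.random.default_rng(0)
X = rng.random((100, 8))
Y = (rng.random((100, 4)) < 0.15).astype(int)  # sparse labels: unbalanced density
print(label_density(Y))                        # well below 0.5 for every label
Z0 = label_specific_features(X, Y[:, 0])       # specific features for label 0
```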

Cited by 5 publications (2 citation statements)
References 40 publications

“…The OAL1-ELM can be trained on one-by-one samples or in batch mode. Pei and Wang [135] used ELM for label-specific feature learning.…”
Section: Representation/Feature Learning (mentioning)
Confidence: 99%
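
The ELM this statement refers to trains only its output layer; the hidden layer is a fixed random projection. Below is a minimal batch-mode sketch that pairs the ELM hidden layer with an elastic-net output solve, as the paper's title suggests; the activation, layer sizes, and regularization strengths are illustrative assumptions, not the paper's settings.

```python
# Batch-mode ELM sketch: fixed random hidden layer, trained output layer.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_in, n_hidden = 10, 64

# Fixed random hidden layer: ELM never trains these weights.
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)

X = rng.random((200, n_in))
y = rng.random(200)

# Batch mode: one regularized solve for the output weights. ElasticNet
# stands in here for the elastic-net penalty named in the paper's title.
out = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(hidden(X), y)
pred = out.predict(hidden(X))
```
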
“…To deal with degraded raw data, the back-propagation neural network and the weight application to failure times (WAFT) prediction technique are used to establish the rolling-bearing prediction model. In [22], a RUL forecasting approach based on competitive learning was presented, in which statistical properties obtained by applying the continuous wavelet transform (CWT) to the data were taken as the input of a recurrent neural network (RNN). Similar defect-propagation stages of the monitored bearing are represented by clustering the input data.…”
Section: Introduction (mentioning)
Confidence: 99%
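
A minimal sketch of the CWT-to-features step described in that passage, assuming PyWavelets (`pywt`), a Morlet wavelet, and simple per-scale statistics; the cited work's exact wavelet and feature set are not specified here.

```python
# CWT of a vibration-like signal, summarized per scale into statistics
# that could be stacked over time windows to form an RNN input sequence.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)

scales = np.arange(1, 33)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Per-scale statistical properties: mean magnitude and energy.
features = np.stack([np.abs(coeffs).mean(axis=1),
                     (coeffs ** 2).sum(axis=1)], axis=1)
print(features.shape)  # (32, 2): one row per wavelet scale
```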