2020
DOI: 10.1109/access.2020.3010862

Local Sensitive Dual Concept Factorization for Unsupervised Feature Selection

Abstract: In this paper, we present a novel Local Sensitive Dual Concept Learning (LSDCL) method for the task of unsupervised feature selection. We first reconstruct the original data matrix with the proposed dual concept learning model, which inherits the merits of the co-clustering-based dual learning mechanism for a more interpretable and compact data reconstruction. We then adopt a local sensitive loss function, which places more emphasis on the most similar pairs with small errors, to better characterize the local structure of the data…
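The abstract is truncated, but the two ingredients it names (a co-clustering style dual reconstruction and a local sensitive loss) suggest an objective of roughly the following shape. This is a sketch in our own notation, not the authors' exact formulation: X is the data matrix, U soft-clusters samples, V soft-clusters features, the small core S links the two, the weights w_ij are the local-sensitive terms that emphasize highly similar pairs with small residuals, and Omega(V) stands in for a presumed sparsity regularizer whose row norms could score features.

```latex
% Plausible shape of the objective (our notation; the exact LSDCL
% formulation is not fully visible in the truncated abstract):
\min_{U \ge 0,\; S \ge 0,\; V \ge 0}\;
  \sum_{i,j} w_{ij}\,\bigl(X_{ij} - [\,U S V^{\top}\,]_{ij}\bigr)^{2}
  \;+\; \lambda\,\Omega(V)
```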

Cited by 7 publications (2 citation statements) · References 40 publications
“…Their solution is to calculate the dependence between features from their interrelationships and apply it to regression-based unsupervised feature selection. One of the most recent studies, by Zhao et al (2020), points out that just as any sample can be reconstructed from other samples, any feature can be reconstructed from other features; since the feature similarity matrix is high-dimensional, however, a lower-dimensional alternative to subspace clustering of features is preferable. They pursue a compact estimate of the data reconstruction: instead of applying subspace clustering to samples or features, they use triple matrix factorization, which simultaneously clusters samples and features with lower-dimensional matrices, making the results more stable and reliable.…”
Section: Related Work
Confidence: 99%
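To make the "triple matrix factorization" idea concrete, here is a minimal NumPy sketch of a generic nonnegative tri-factorization X ≈ U S Vᵀ with standard multiplicative updates. This is an illustration under assumptions, not the LSDCL algorithm or Zhao et al.'s exact updates; the function name and hyperparameters are ours, and the local-sensitive weighting is omitted for brevity.

```python
import numpy as np

def tri_factorize(X, k_row, k_col, n_iter=200, eps=1e-9, seed=0):
    """Generic nonnegative tri-factorization X ~= U @ S @ V.T (a sketch,
    not the exact LSDCL / Zhao et al. (2020) algorithm).

    U (n x k_row) soft-clusters samples, V (d x k_col) soft-clusters
    features, and the small core S (k_row x k_col) couples the two --
    the dual, co-clustering style reconstruction the statement describes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.random((n, k_row))
    S = rng.random((k_row, k_col))
    V = rng.random((d, k_col))
    for _ in range(n_iter):
        # Standard multiplicative updates for min ||X - U S V^T||_F^2
        # subject to U, S, V >= 0 (X is assumed nonnegative).
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
    return U, S, V

# Toy usage: 100 samples, 20 features, 3 sample concepts, 4 feature concepts.
X = np.abs(np.random.default_rng(1).normal(size=(100, 20)))
U, S, V = tri_factorize(X, k_row=3, k_col=4)
print("relative error:", np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X))
```

Because U, S, and V are all low-dimensional, the reconstruction never materializes a d × d feature-similarity matrix, which is the dimensionality advantage the statement attributes to this approach; a common follow-up heuristic (our assumption, not necessarily the paper's rule) is to rank features by the row norms of V and keep the top ones.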
“…Early filter-based approaches were statistical/information-based methods, including (Mitra et al, 2002) and (Ferreira & Figueiredo, 2012), or bio-inspired ones (Tabakhi et al, 2014), while later approaches are based on sparse/spectral learning. The latter techniques have clearly demonstrated the importance of the extracted features' ability to capture the cluster structure of the data, to reduce the reconstruction error, and to preserve the local structure of the data (Zhao et al, 2020). In filter models, feature correlation, namely redundancy (dependence among features), has a great impact on machine learning performance.…”
Section: Introduction
Confidence: 99%