2023
DOI: 10.1049/cvi2.12228
Low‐rank preserving embedding regression for robust image feature extraction

Abstract: Although low‐rank representation (LRR)‐based subspace learning has been widely applied for feature extraction in computer vision, how to enhance the discriminability of the low‐dimensional features extracted by LRR‐based subspace learning methods is still a problem that needs further investigation. Therefore, this paper proposes a novel low‐rank preserving embedding regression (LRPER) method by integrating LRR, linear regression, and projection learning into a unified framework. In LRPER, LRR can reveal t…
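The abstract describes coupling low‐rank representation, linear regression, and projection learning in a single objective. The paper's exact formulation is not reproduced on this page; as a hedged sketch only, a generic objective of that kind (with illustrative trade‐off weights λ₁, λ₂ and an error term E introduced here for exposition, not taken from the paper) can be written as

$$\min_{Z,\,E,\,P,\,W}\ \|Z\|_{*} + \lambda_{1}\|E\|_{2,1} + \lambda_{2}\big\|Y - W^{\top}P^{\top}X\big\|_{F}^{2} \quad \text{s.t.}\ \ P^{\top}X = P^{\top}XZ + E,$$

where $X$ is the data matrix, $Y$ the label matrix, $Z$ the low‐rank representation coefficients, $E$ a sparse error term, $P$ the learnt projection, and $W$ the regression matrix.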

Cited by 8 publications (4 citation statements) | References 41 publications
“…Dimensionality reduction methods mainly include linear and nonlinear approaches [ 61 ]. Linear LRPER [ 62 ] and the nonlinear method T-SNE [ 63 ] are two popular dimensionality reduction methods. Given T-SNE’s advantage in preserving local structure, we opt for T-SNE as our dimensionality reduction tool.…”
Section: Methods (mentioning)
confidence: 99%
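The citing passages above use T‐SNE to project learned features into two dimensions. A minimal sketch of that step, assuming scikit‐learn and a hypothetical random feature matrix standing in for the real embeddings, could look like this:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical high-dimensional features (stand-in for learned embeddings)
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))

# Nonlinear reduction to 2-D; perplexity controls the effective neighbourhood size,
# which is why T-SNE is favoured when local structure should be preserved
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
embedding_2d = tsne.fit_transform(features)
print(embedding_2d.shape)  # (500, 2)
```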
“…Dimensionality reduction methods mainly include linear and nonlinear approaches [61]. Linear LRPER [62] and the nonlinear method T-SNE [63] […] In contrast, DrugBank performs optimally with only 1-Hop neighbors due to its higher node degree, where a larger subgraph might introduce noise, potentially counteracting the benefits. In the concluding model, the parameter H is set to 2 for the BioSNAP and AdverseDDI datasets, whereas it is set to 1 for the DrugBank dataset.…”
Section: Visualization (mentioning)
confidence: 99%
“…However, the computational cost of SVM is very high when the number of classes is relatively large. To address this problem, linear regression (LR) was developed for classification [5, 6]. However, LR‐based methods transform the labels into a strict binary matrix, which results in label fitting with little freedom.…”
Section: Introduction (mentioning)
confidence: 99%
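The "strict binary matrix" mentioned in the quote above is the one‐hot target matrix used by least‐squares (LR‐based) classifiers. A minimal NumPy sketch of that general scheme (an illustration only, not the LRPER method; the ridge regulariser and data shapes are assumptions) follows:

```python
import numpy as np

def one_hot(labels, n_classes):
    """Build the strict binary (one-hot) target matrix Y."""
    Y = np.zeros((len(labels), n_classes))
    Y[np.arange(len(labels)), labels] = 1.0
    return Y

def fit_lr_classifier(X, labels, n_classes, reg=1e-3):
    """Ridge-regularised least squares: W = (X^T X + reg*I)^{-1} X^T Y."""
    Y = one_hot(labels, n_classes)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

def predict(X, W):
    """Assign each sample to the class with the largest regression response."""
    return np.argmax(X @ W, axis=1)

# Toy usage with random data (hypothetical shapes)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))
y_train = rng.integers(0, 3, size=100)
W = fit_lr_classifier(X_train, y_train, n_classes=3)
print(predict(X_train[:5], W))
```

Because each row of Y contains exactly one 1, the regression targets leave little freedom in the fit, which is the limitation the quoted passage points out.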
“…Based on visual insect recognition research in recent years, this study briefly divides these works into traditional computer vision technology-based and deep learning-based methods. Traditional computer vision methods, exemplified by works such as [3,4], rely on feature extraction and automated classification, but they often exhibit lower accuracy or limited generalization. By contrast, applying deep learning based on convolutional neural networks (CNNs) to insect recognition can more accurately identify insects without the need for manual feature extraction.…”
Section: Introduction (mentioning)
confidence: 99%