2017
DOI: 10.1016/j.sigpro.2017.04.018

Enhanced regularized least square based discriminative projections for feature extraction

Cited by 7 publications (2 citation statements) · References 33 publications
“…The spatial resolution of all test images is set to 256 × 256, and to ensure that registration has been achieved, we adopt a feature-based registration algorithm for each pair, i.e. the method of complementary Harris feature point extraction based on mutual information [37], which has strong robustness and is able to adapt to various image characteristics and variations.…”
Section: Source Images (citation type: mentioning)
confidence: 99%
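
The registration step described in this excerpt can be sketched as follows. This is a minimal illustration only, assuming OpenCV's Harris detector and a histogram-based mutual-information score; it is not the actual algorithm of [37], and the helper names harris_corners and patch_mutual_info are hypothetical.

# Illustrative sketch: Harris corner extraction plus a mutual-information
# score for candidate correspondences between a pair of source images.
# Assumes opencv-python, numpy, and scikit-learn are installed.
import cv2
import numpy as np
from sklearn.metrics import mutual_info_score

def harris_corners(gray, max_corners=200):
    """Detect Harris corner points in an 8-bit grayscale image."""
    pts = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=10, useHarrisDetector=True, k=0.04)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

def patch_mutual_info(img_a, img_b, pt_a, pt_b, half=8, bins=16):
    """Score a candidate correspondence by the mutual information of
    the intensity patches around the two corner points."""
    ya, xa = int(pt_a[1]), int(pt_a[0])
    yb, xb = int(pt_b[1]), int(pt_b[0])
    pa = img_a[ya - half:ya + half, xa - half:xa + half]
    pb = img_b[yb - half:yb + half, xb - half:xb + half]
    if pa.shape != (2 * half, 2 * half) or pb.shape != (2 * half, 2 * half):
        return -np.inf  # patch runs off the image border
    # Quantise intensities into bins so they can act as discrete labels
    edges = np.linspace(0, 255, bins)
    return mutual_info_score(np.digitize(pa.ravel(), edges),
                             np.digitize(pb.ravel(), edges))

In a registration pipeline of this kind, corner pairs with high mutual information would be kept as matches and used to estimate the aligning transform; the complementary-feature selection of [37] itself is not reproduced here.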
“…Yang et al [12] developed a regularised least square based discriminative projections method, which maximises the between-class scatter adopted by LDA and minimises the within-class compactness via the reconstruction residual from the same class. Yuan et al [13] proposed an enhanced regularised least square based discriminative projections method, where each sample is reconstructed by all the associated coefficients and the distances between each sample and all its reconstructed within-class samples are minimised. However, both the 1-graph and the 2-graph only capture the local structure of each datum and lose sight of the global structure of the data from the perspective of the entire dataset.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
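
As a rough reading of the two criteria in this excerpt (the notation below is ours, not taken from [12] or [13]): let P be the projection matrix, S_b the LDA between-class scatter, x_i a training sample, C(i) the index set of its same-class samples, and w_ij the representation coefficients of x_i over those samples. A trace-ratio criterion of this family can then be sketched as

\[
\max_{P}\;
\frac{\operatorname{tr}\!\left(P^{\top} S_b\, P\right)}
     {\sum_{i} \sum_{j \in C(i)} \left\| P^{\top} x_i - w_{ij}\, P^{\top} x_j \right\|^{2}},
\]

where the numerator is the projected between-class scatter maximised as in LDA, and the denominator sums, for each sample, the distances to all of its reconstructed within-class samples, matching the enhanced variant described in the quote; the earlier method of [12] would instead penalise a single pooled residual of the form \(\| P^{\top} x_i - \sum_{j \in C(i)} w_{ij} P^{\top} x_j \|^{2}\).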