2005
DOI: 10.1016/j.patrec.2004.09.014
Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition

Cited by 276 publications (152 citation statements)
References 27 publications
“…This in turn increases the likelihood that S w is degenerate, and further necessitates the need for a method such as RS-LDA. Other LDA variants offer solutions to this small sample size problem [13,7]; however, RS-LDA is preferred due to the ease of implementation and wider range of successful applications in face recognition [23,11,8,9].…”
Section: Random Subspace Face Recognition (mentioning)
confidence: 99%
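For context on the quoted small sample size argument, here is a minimal numerical sketch (not taken from the cited works; the dimensions, class counts, and regularization weight lam are illustrative assumptions) showing how the within-class scatter S_w becomes rank deficient when the feature dimension exceeds the number of training samples, and how a simple ridge-style term restores invertibility:

```python
import numpy as np

# Illustrative only: feature dimension d far exceeds the total number of
# training samples, which is the small sample size (SSS) regime.
rng = np.random.default_rng(0)
d, n_classes, n_per_class = 100, 5, 4
X = [rng.normal(size=(n_per_class, d)) for _ in range(n_classes)]

# Within-class scatter: sum of per-class centred outer products.
S_w = sum((Xi - Xi.mean(axis=0)).T @ (Xi - Xi.mean(axis=0)) for Xi in X)
print(np.linalg.matrix_rank(S_w))      # at most n_classes*(n_per_class-1) = 15 << d, so singular

# One common remedy (lam is an assumed value): shrink S_w toward the identity
# so it becomes invertible and a Fisher-type criterion can be evaluated.
lam = 1e-2
S_w_reg = S_w + lam * np.eye(d)
print(np.linalg.matrix_rank(S_w_reg))  # full rank d
```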
“…The representative LSL methods, including PCA (Eigenface), LPP (Laplacianface) [7], FLDA (Fisherface) [2], RLDA [4] and SPP [21], are used for comparison. The code of the proposed USCP and SSCP methods can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/code.htm.…”
Section: Results (mentioning)
confidence: 99%
“…Representative LSL methods include principal component analysis (PCA), e.g., Eigenface [1], Fisher linear discriminant analysis (FLDA) [2][3][4], the manifold learning [5][6] based locality preserving projection (LPP) [7], local discriminant embedding (LDE) [8], graph embedding [9], etc. Depending on whether the class label information of the training samples is exploited, the LSL methods can be categorized into unsupervised methods (e.g., PCA and LPP) and supervised methods (e.g., FLDA [2], regularized LDA (RLDA) [4] and LDE).…”
Section: Introduction (mentioning)
confidence: 99%
“…To this end, we first introduce a regularized Fisher's criterion [14]. The criterion, which is utilized in this work instead of the conventional one (Eq.6), can be expressed as follows:…”
Section: A Regularized Fisher's Criterion (mentioning)
confidence: 99%
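Since the quotation elides the equation itself, the following is a hedged reconstruction of one common form such a regularized Fisher's criterion takes (the symbols ψ, S_b, S_w, and the regularization parameter η are notational assumptions, not a transcription of the cited replacement for Eq. 6):

```latex
% Hedged sketch of a regularized Fisher's criterion: the weight \eta \in [0,1]
% keeps the denominator well conditioned when S_w is rank deficient.
\begin{equation}
  \psi^{*} = \arg\max_{\psi}
  \frac{\psi^{\top} S_b \, \psi}
       {\eta \left(\psi^{\top} S_b \, \psi\right) + \psi^{\top} S_w \, \psi}
\end{equation}
```

Setting η = 0 recovers the conventional Fisher criterion, while η > 0 keeps the denominator nonzero even when ψ lies in the null space of S_w.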