2005
DOI: 10.1016/j.neunet.2005.06.041
Generalized 2D principal component analysis for face image representation and recognition

Cited by 170 publications (124 citation statements)
References 15 publications
“…This method evaluates the within-class scatter matrix directly from the image matrices, without transforming the images into vectors, and thereby overcomes the singularity problem in the within-class scatter matrix (Gao, 2008). TDLDA uses the Fisher criterion to find the optimal discriminative projection (Kong, 2005). The TDLDA method with Support Vector Machine (SVM) classification achieves better results than TDLDA with KNN, 2DPCA, and Fisherface (Damayanti, 2010).…”
Section: Pendahuluan (Introduction)
unclassified
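The statement above describes the core of TDLDA (2D-LDA): scatter matrices are built directly from the image matrices rather than from vectorized images, which keeps the within-class scatter small and non-singular, and the Fisher criterion then yields the discriminative projection. A minimal NumPy sketch of that idea (the function name `two_d_lda` and the column-side projection are illustrative choices, not the cited authors' exact formulation):

```python
import numpy as np

def two_d_lda(images, labels, k):
    """Sketch of 2D-LDA / TDLDA: within- and between-class scatter are
    computed directly from r x c image matrices, so S_w is only c x c
    and avoids the singularity of vectorized LDA."""
    images = np.asarray(images, dtype=float)    # shape (n, r, c)
    labels = np.asarray(labels)
    overall_mean = images.mean(axis=0)          # r x c grand mean
    c = images.shape[2]
    S_w = np.zeros((c, c))
    S_b = np.zeros((c, c))
    for cls in np.unique(labels):
        cls_imgs = images[labels == cls]
        cls_mean = cls_imgs.mean(axis=0)
        for X in cls_imgs:                      # within-class scatter
            d = X - cls_mean
            S_w += d.T @ d
        m = cls_mean - overall_mean             # between-class scatter
        S_b += len(cls_imgs) * (m.T @ m)
    # Fisher criterion: leading eigenvectors of S_w^{-1} S_b
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(-eigvals.real)
    W = eigvecs[:, order[:k]].real              # c x k projection
    return W  # each image is then represented as X @ W (r x k)
```

A classifier such as SVM or KNN, as compared in the citation, would then be trained on the projected `X @ W` features.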
“…, X n which are each of size r × c. It is frequently of interest to summarise the variability in the data by a linear projection to a lower-dimensional sub-space. Some recently developed and popular face recognition dimension reduction tools are 2DPCA (Yang et al 2004), matrix space image representations (Rangarajan 2001), bilateral-projection-based 2DPCA (B2DPCA) (Kong et al 2005), the generalized low rank approximation of matrices (GLRAM) (Ye 2005) and modular PCA (Gottumukkal and Asari 2004; Gao 2007).…”
Section: Maximum Likelihood Estimation
mentioning
confidence: 99%
“…An alternative dimension reduction technique for matrices is bilateral 2D principal components analysis (B2DPCA) (Kong et al 2005) which is equivalent to generalised low rank approximations of matrices (GLRAM) (Ye 2005), which in turn is equivalent to the earlier method of learning matrix space image representations (Rangarajan 2001). The task is again to find matrices of orthonormal columns A q , B s as above, but the aim is quite different in that low dimensional projections of the images themselves are required with little loss in information.…”
Section: Bilateral 2D Principal Components Analysis
mentioning
confidence: 99%
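The B2DPCA/GLRAM equivalence noted above amounts to finding left and right orthonormal projections A and B so that the compressed images A^T X_i B lose little information; the standard way to compute them is an alternating eigen-update. A minimal sketch under that reading (the name `glram` and the identity-matrix initialisation are assumptions for illustration):

```python
import numpy as np

def glram(images, q, s, n_iter=10):
    """Sketch of GLRAM / B2DPCA: alternately update the left (r x q)
    and right (c x s) orthonormal projections A, B that maximise the
    retained energy of the bilaterally projected images A^T X_i B."""
    X = np.asarray(images, dtype=float)        # shape (n, r, c)
    n, r, c = X.shape
    B = np.eye(c, s)                           # simple initial right projection
    for _ in range(n_iter):
        # Fix B, update A: top-q eigenvectors of sum_i X_i B B^T X_i^T
        M_A = sum(Xi @ B @ B.T @ Xi.T for Xi in X)
        A = np.linalg.eigh(M_A)[1][:, ::-1][:, :q]
        # Fix A, update B: top-s eigenvectors of sum_i X_i^T A A^T X_i
        M_B = sum(Xi.T @ A @ A.T @ Xi for Xi in X)
        B = np.linalg.eigh(M_B)[1][:, ::-1][:, :s]
    return A, B   # low-dimensional representation: A.T @ X_i @ B (q x s)
```

Because each half-step is an exact eigen-solution with the other factor fixed, the retained energy is non-decreasing across iterations, which is why a small fixed `n_iter` usually suffices in practice.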
“…Kernel techniques can effectively extend traditional subspace methods to handle nonlinearity. Hui Kong et al. proposed a two-dimensional kernel principal component analysis (K2DPCA) method [5], which uses kernel learning not only to extract nonlinear features of the human face but also to map nonlinearly inseparable face images into a high-dimensional feature space H, where an optimal hyperplane can be established to achieve linear separability [6]. But PCA, 2DPCA, and K2DPCA all fail to make full use of the class information of the training samples [7].…”
Section: Introduction
mentioning
confidence: 99%
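The kernel trick described above — implicitly mapping samples into a high-dimensional feature space H and extracting principal components there — is the generic building block that K2DPCA applies to image matrices. A minimal sketch of that building block only (RBF kernel, function name `kernel_pca`, and the `gamma` default are illustrative assumptions; the cited K2DPCA formulation follows Kong et al., not this generic version):

```python
import numpy as np

def kernel_pca(samples, k, gamma=0.1):
    """Generic kernel PCA sketch: form the RBF Gram matrix (implicit map
    into feature space H), centre it in H, and project the samples onto
    the leading k principal directions in H."""
    X = np.asarray(samples, dtype=float)        # shape (n, d)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                     # Gram matrix in H
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # centring in feature space
    w, V = np.linalg.eigh(Kc)                   # ascending eigenvalues
    idx = np.argsort(-w)[:k]                    # pick the top-k components
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alphas                          # projections, shape (n, k)
```

In the K2DPCA setting the same machinery is applied to the rows of each image matrix rather than to whole vectorized images, which is what keeps the Gram matrix tractable.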