Linear Discriminant Analysis (LDA) is a popular technique for supervised dimensionality reduction, and it performs well on Gaussian-distributed data. However, because it neglects the local structure of the data, LDA is inapplicable to many real-world situations. Some works therefore perform discriminant analysis between neighboring points, but such methods are easily affected by noise in the original data space. In this paper, we propose a new supervised dimensionality reduction method, Locality Adaptive Discriminant Analysis (LADA), to learn a representative subspace of the data. Compared to LDA and its variants, the proposed method has three salient advantages: (1) it finds the principal projection directions without imposing any assumption on the data distribution; (2) it is able to exploit the local manifold structure of the data in the desired subspace; (3) it exploits the neighbor relationships among points automatically, without introducing any additional parameter to be tuned. Experiments on synthetic datasets and real-world benchmark datasets demonstrate the superiority of the proposed method.
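For context, the baseline that LADA improves upon is classical LDA, which projects data onto directions maximizing between-class scatter relative to within-class scatter. The following is a minimal NumPy sketch of that textbook formulation (not the proposed LADA); the function name and synthetic two-class data are illustrative assumptions.

```python
import numpy as np

def lda_projection(X, y, k):
    """Classical LDA: find k directions maximizing between-class scatter
    relative to within-class scatter (textbook formulation, not LADA)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    # Solve the generalized eigenproblem S_b w = lambda * S_w w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:k]].real

# Two Gaussian classes in 3-D; project onto the single discriminant direction
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, 1)  # projection matrix, shape (3, 1)
Z = X @ W                    # reduced 1-D features
```

On Gaussian data like this, the projected class means are well separated; the limitations discussed above arise when the data departs from this idealized setting.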
Linear Discriminant Analysis (LDA) is a widely used technique for dimensionality reduction and has been applied in many practical applications, such as hyperspectral image classification. Traditional LDA assumes that the data obey a Gaussian distribution. However, in real-world situations, high-dimensional data may follow various kinds of distributions, which restricts the performance of LDA. To alleviate this problem, we propose the Discriminant Analysis with Graph Learning (DAGL) method in this paper. Without any assumption on the data distribution, the proposed method learns the local data relationships adaptively during the optimization. The main contributions of this research are threefold: (1) the local data manifold is captured by learning the data graph adaptively in the subspace; (2) the spatial information within the hyperspectral image is utilized with a regularization term; and (3) an efficient algorithm is designed to optimize the proposed problem, with proved convergence. Experimental results on hyperspectral image datasets show the promising performance of the proposed method and validate its superiority over the state-of-the-art (Remote Sens. 2018, 10, 836).

Among the most widely used feature extraction methods are PCA and LDA. PCA learns the feature subspace by maximizing the variance of the feature matrix, whereas LDA learns a linear transformation that minimizes the within-class distance and maximizes the between-class discrepancy. In this research, we mainly focus on LDA because it is able to use prior knowledge and shows better performance in real-world applications [11]. Though it achieves good performance in many tasks, LDA has four major drawbacks for processing HSI data. Firstly, LDA suffers from the ill-posed problem [12]: it needs to compute the inverse of the within-class scatter matrix S_w, but when the data dimensionality exceeds the number of training samples, S_w is singular. Thus, LDA cannot handle HSI data with a great number of spectral bands.
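The singularity of S_w in the high-dimensional, small-sample regime can be verified directly. The sketch below uses synthetic Gaussian data (not an actual HSI dataset) with more dimensions than samples; the specific sizes are illustrative assumptions.

```python
import numpy as np

# When the data dimensionality d exceeds the number of training samples n,
# the within-class scatter S_w has rank at most n - c (c = class count),
# so the inverse required by classical LDA does not exist.
rng = np.random.default_rng(1)
n, d, c = 20, 50, 2                 # 20 samples, 50 "spectral bands", 2 classes
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)

Sw = np.zeros((d, d))
for cls in (0, 1):
    Xc = X[y == cls]
    Xc_centered = Xc - Xc.mean(axis=0)  # centering removes one rank per class
    Sw += Xc_centered.T @ Xc_centered

rank = np.linalg.matrix_rank(Sw)
print(rank)  # at most n - c = 18, strictly less than d = 50, so S_w is singular
```

Regularized or pseudo-inverse variants of LDA work around this, but the rank deficiency itself is unavoidable whenever d > n.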
Secondly, the number of features that LDA can extract is less than the class number, which is known as the over-reducing problem [13]. Taking the Kennedy Space Center (KSC) dataset [16] as an example, the class number is thirteen, so the rank of the between-class scatter matrix S_b is at most twelve. Thus, LDA can find at most twelve projection directions, which may be insufficient for retaining the useful information.
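The rank bound behind the over-reducing problem is easy to check numerically: S_b is a sum of c rank-one terms whose weighted mean deviations sum to zero, so rank(S_b) <= c - 1. The sketch below uses random data with thirteen classes, mirroring the KSC example; the sample counts are illustrative assumptions.

```python
import numpy as np

# With c classes, S_b = sum_c n_c (m_c - m)(m_c - m)^T, and since
# sum_c n_c (m_c - m) = 0, its rank is at most c - 1. For 13 classes
# (as in KSC), LDA therefore yields at most 12 projection directions.
rng = np.random.default_rng(2)
c, per_class, d = 13, 30, 100
X = rng.normal(size=(c * per_class, d))
y = np.repeat(np.arange(c), per_class)

mean_all = X.mean(axis=0)
Sb = np.zeros((d, d))
for cls in range(c):
    diff = (X[y == cls].mean(axis=0) - mean_all).reshape(-1, 1)
    Sb += per_class * diff @ diff.T

print(np.linalg.matrix_rank(Sb))  # at most c - 1 = 12, despite d = 100
```

Because the number of useful eigenvectors of the generalized eigenproblem is capped by rank(S_b), no choice of data can push standard LDA past c - 1 dimensions.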