Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples, with classification to the most-contributing class. Because it assumes test samples can be written as linear combinations of their same-class training samples, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and includes training samples and their corresponding tangent basis vectors. We use a synthetic data set and three face databases to demonstrate that this method can achieve higher classification accuracy than SRC in cases of sparse sampling, nonlinear class manifolds, and stringent dimension reduction.

Classification is difficult when the given training set is insufficient to generalize the data set's class structure [5], as well as in the presence of occlusion and noise [2]. In 2009, Wright et al. proposed sparse representation-based classification (SRC) [2]. SRC was motivated by the recent boom in the use of sparse representation in signal processing (see, e.g., the work of Candès [6]). The catalyst of these advancements was the discovery that, under certain conditions, the sparsest representation of a signal using an over-complete set of vectors (often called a dictionary) could be found by minimizing the ℓ1-norm of the representation coefficient vector [7]. Since the ℓ1-minimization problem is convex, this gave rise to a tractable approach to obtaining the sparsest solution.

SRC applies this relationship between the minimum ℓ1-norm and the sparsest solution to classification. The algorithm seeks the sparsest decomposition of a test sample over the dictionary of training samples via ℓ1-minimization, with classification to the class whose corresponding portion of the representation approximates the test sample with the least error. The method assumes that class manifolds are linear subspaces, so that the test sample can be represented using training samples in its ground-truth class. Wright et al. [2] argue that this is precisely the sparsest decomposition of the test sample over the training set. They make the case that sparsity is critical to high-dimensional image classification and that, if properly harnessed, it can lead to superior classification performance, even on highly corrupted or occluded images. Further, good results can be achieved regardless of the choice of image features used for classification, provided that the number of retained features is large enough [2]. Though SRC was originally applied to face recognition, similar methods have been employed in clustering [8], dimension reduction [9], and texture and handwritten digit classification [10].

The SRC assumption that class manifolds are linear subspaces is often violated; e.g., facial images that v...
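The SRC decision rule described above reduces to a few lines of linear algebra: sparse-code the test sample over the training dictionary, then compare class-wise reconstruction residuals. Below is a minimal sketch in Python, not Wright et al.'s reference implementation: it substitutes scikit-learn's lasso (an unconstrained ℓ1-regularized relaxation) for the exact equality-constrained ℓ1 program, and the alpha value and the toy data in the demo are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """Assign y to the class whose coefficients best reconstruct it.

    A      : (d, n) matrix whose columns are l2-normalized training samples
    labels : length-n class labels, one per column of A
    y      : length-d test sample
    alpha  : lasso penalty standing in for exact l1-minimization (assumed)
    """
    labels = np.asarray(labels)
    # Sparse coding step: minimize (1/2d)*||y - Ax||^2 + alpha*||x||_1.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(A, y)
    x = coder.coef_
    # Class-wise residuals: zero out all coefficients except class c's
    # and measure how well that class alone approximates y.
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Toy demo: a test sample near a class-0 training sample gets label 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
A /= np.linalg.norm(A, axis=0)
labels = np.repeat([0, 1], 10)
y = A[:, 3] + 0.01 * rng.normal(size=50)
print(src_classify(A, labels, y))  # -> 0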
We consider the decomposition of a signal over an overcomplete set of vectors. Finding the sparsest such decomposition is NP-hard in general, but minimizing the ℓ1-norm of the coefficient vector can often retrieve it (so-called "ℓ1/ℓ0-equivalence"), and this fact has powered the field of compressed sensing. Wright et al.'s sparse representation-based classification (SRC) applies this relationship to machine learning, wherein the signal to be decomposed represents the test sample and the columns of the dictionary are training samples. We investigate the relationships between ℓ1-minimization, sparsity, and classification accuracy in SRC. After proving that the tractable, deterministic approach to verifying ℓ1/ℓ0-equivalence fundamentally conflicts with the high coherence between same-class training samples, we demonstrate that ℓ1-minimization can still recover the sparsest solution when the classes are well separated. Further, using a nonlinear transform so that sparse recovery conditions may be satisfied, we demonstrate that approximate (rather than strict) equivalence is key to the success of SRC.
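The deterministic check referenced here is typically the mutual-coherence test: for a dictionary A with ℓ2-normalized columns and coherence μ(A), the largest absolute inner product between distinct columns, the classical Donoho-Elad bound guarantees ℓ1/ℓ0-equivalence for every x with ||x||_0 < (1 + 1/μ(A))/2. A minimal sketch (with a synthetic random dictionary standing in for real training data) makes the conflict with high same-class coherence concrete:

import numpy as np

def mutual_coherence(A):
    """Largest |<a_i, a_j>| over distinct l2-normalized columns of A."""
    G = A.T @ A
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 128))   # synthetic dictionary (assumed)
A /= np.linalg.norm(A, axis=0)

mu = mutual_coherence(A)
# l1-minimization provably recovers any x with ||x||_0 < (1 + 1/mu)/2.
# Nearly parallel same-class training samples push mu toward 1, so the
# guarantee collapses to roughly 1-sparse solutions.
print(f"mu = {mu:.3f}; guaranteed recoverable sparsity < {(1 + 1/mu)/2:.2f}")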