Owing to their ability to learn intrinsic structures from high-dimensional data, techniques based on sparse representation have had an impressive impact in fields such as image processing, computer vision, and pattern recognition. Learning sparse representations is often computationally expensive, however, because it requires iteratively solving convex optimization problems whose number of iterations is unknown before convergence. Moreover, most sparse representation algorithms focus only on the final representation and ignore how the sparsity ratio changes during the iterations. In this paper, two algorithms are proposed to learn sparse representations based on locality-constrained linear representation learning with probabilistic simplex constraints. The first algorithm, called approximated local linear representation (ALLR), obtains a closed-form solution from individual locality-constrained sparse representations. The second algorithm, called approximated local linear representation with symmetric constraints (ALLRSC), further obtains fully symmetric sparse representations with a limited number of computations; notably, the sparsity and convergence of the representations are guaranteed by theoretical analysis. The steady decline of the sparsity ratio during the iterations is a critical factor in practical applications. Experimental results on public datasets demonstrate that the proposed algorithms outperform several state-of-the-art algorithms for learning with high-dimensional data.
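The abstract above does not specify the ALLR/ALLRSC update rules, so the following is only a generic sketch of the underlying idea it names: representing each sample as a convex combination of its nearest neighbors, with the weights constrained to the probabilistic simplex (nonnegative, summing to one) and a locality penalty discouraging weight on distant neighbors. The function names and the projected-gradient solver are illustrative assumptions, not the paper's closed-form method.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (sorting-based algorithm, O(k log k))."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def local_linear_codes(X, k=5, lam=1e-3, n_iter=100):
    """Toy locality-constrained coding: approximate each sample by a convex
    combination (simplex-constrained weights) of its k nearest neighbors,
    penalizing weights on distant neighbors, via projected gradient descent."""
    n = X.shape[0]
    W = np.zeros((n, n))
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]          # exclude the point itself
        B = X[nbrs]                               # k x d local basis
        loc = lam * D[i, nbrs] ** 2               # locality penalty weights
        w = np.full(k, 1.0 / k)                   # start at the simplex center
        lr = 1.0 / (np.linalg.norm(B, 2) ** 2 + loc.max() + 1e-9)
        for _ in range(n_iter):
            grad = B @ (w @ B - X[i]) + loc * w   # gradient of the quadratic objective
            w = project_simplex(w - lr * grad)    # keep weights on the simplex
        W[i, nbrs] = w
    return W
```

Because every row of the resulting coefficient matrix lies on the simplex, the codes can be read directly as an affinity over neighbors; the symmetric variant described in the abstract would additionally enforce `W = W.T`.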
Incomplete multiview data are collected from multiple sources or characterized by multiple modalities, where the features of some samples or some views may be missing. Incomplete multiview clustering aims to partition such data into groups by taking full advantage of the complementary information across the incomplete views. Most existing methods, based on matrix factorization or subspace learning, attempt to recover the missing views or impute the missing features to improve clustering performance. However, this problem is intractable without prior knowledge, e.g., label information or the data distribution, especially when the missing views or features are completely damaged. In this paper, we propose an augmented sparse representation (ASR) method for incomplete multiview clustering. We first introduce a discriminative sparse representation learning (DSRL) model, which learns sparse representations of the multiple views that are used to measure the similarity of the existing features. The DSRL model explores complementary and consistent information by integrating a sparsity regularization term and a consensus regularization term, respectively, while simultaneously learning a discriminative dictionary from the original samples. The sparsity-constrained optimization problem in the DSRL model can be solved efficiently by the alternating direction method of multipliers (ADMM). We then present a similarity fusion scheme, namely sparsity-augmented fusion of the sparse representations, to obtain a sparsity-augmented similarity matrix across the views for spectral clustering. Experimental results on several datasets demonstrate the effectiveness of the proposed ASR method for incomplete multiview clustering.
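The abstract states that the DSRL subproblem is solved with the alternating direction method of multipliers but gives no details, so the following is a minimal generic ADMM sketch for a standard l1-regularized (lasso-style) sparse coding step, not the paper's DSRL model: the code variable is split into a least-squares variable and an l1 variable linked by a dual update. All names and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, x, lam=0.1, rho=1.0, n_iter=200):
    """Minimize 0.5*||D z - x||^2 + lam*||z||_1 by ADMM, splitting z into a
    quadratic variable z and an l1 variable y with scaled dual u."""
    m = D.shape[1]
    y = np.zeros(m)
    u = np.zeros(m)
    A = D.T @ D + rho * np.eye(m)     # reused in every z-update
    Dtx = D.T @ x
    for _ in range(n_iter):
        z = np.linalg.solve(A, Dtx + rho * (y - u))   # quadratic subproblem
        y = soft_threshold(z + u, lam / rho)          # l1 proximal step
        u = u + z - y                                 # dual ascent
    return y
```

In a multiview setting along the lines sketched in the abstract, such per-view sparse codes could then be combined into a symmetric affinity, e.g. averaging `abs(Z_v).T @ abs(Z_v)` over the views, before running spectral clustering; that fusion rule is likewise an assumption, not the paper's scheme.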