2023
DOI: 10.1109/tnnls.2021.3119278
Efficient Sparse Representation for Learning With High-Dimensional Data

Abstract: Due to their capability to effectively learn intrinsic structures from high-dimensional data, techniques based on sparse representation have begun to make an impressive impact in several fields, such as image processing, computer vision, and pattern recognition. Learning sparse representations is often computationally expensive because of the iterative computations needed to solve convex optimization problems, in which the number of iterations is unknown before convergence. Moreover, most sparse representation a…

Cited by 9 publications (2 citation statements) | References 46 publications
“…Sparse representation is an extremely successful technique for representing high-dimensional data [33], [34]. Let the dictionary $D = [d_1, d_2, \ldots, d_n] \in \mathbb{R}^{d \times n}$ be a set of $n$ vectors, where each column has length $d$. Given a vector $x \in \mathbb{R}^d$, it can be approximately represented by a sparse linear combination of only a few columns of $D$. Specifically, a sparse representation vector $z \in \mathbb{R}^n$ of $x$ can be calculated via the following $\ell_0$-norm minimization problem:…”
Section: A. Sparse Representation Theory
Citation type: mentioning (confidence: 99%)
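The quoted excerpt cuts off before the optimization problem itself. For reference, a standard form of the $\ell_0$ sparse coding problem consistent with the definitions above is sketched below; the exact variant used in the cited paper (an equality constraint versus an error tolerance $\varepsilon$) is an assumption here, not text recovered from the source.

% Sketch of the standard \ell_0-norm sparse coding problem (assumed
% formulation; the quoted excerpt is truncated before the equation).
\min_{z \in \mathbb{R}^{n}} \; \|z\|_{0}
\quad \text{subject to} \quad x = D z ,
% or, tolerating a reconstruction error \varepsilon \ge 0 (assumed variant):
\min_{z \in \mathbb{R}^{n}} \; \|z\|_{0}
\quad \text{subject to} \quad \|x - D z\|_{2} \le \varepsilon .

Because the $\ell_0$ "norm" counts nonzero entries, this problem is NP-hard in general, which is why practical solvers rely on greedy methods or convex relaxations; that cost is exactly the computational burden the paper's abstract refers to.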
“…It means that the original data can be reconstructed with high probability from a small number of observations [33]. The three core issues of compressed sensing theory are the sparse representation of signals [34], [35], the design of observation matrices, and reconstruction algorithms. In many practical problems, the data are two-dimensional matrices.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
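To make the "small number of observations" claim concrete, a minimal sketch of the standard compressed sensing measurement model is given below; the symbols $\Phi$, $\Psi$, and $m$ follow common convention and are not notation taken from the cited paper.

% Sketch of the standard compressed sensing model (assumed notation).
% Observe m << d linear measurements of a signal x \in \mathbb{R}^{d}:
%   y = \Phi x, with observation matrix \Phi \in \mathbb{R}^{m \times d}.
% If x is sparse in some basis \Psi (x = \Psi z with z sparse), then x can
% be recovered with high probability via the convex relaxation known as
% basis pursuit:
\min_{z \in \mathbb{R}^{d}} \; \|z\|_{1}
\quad \text{subject to} \quad y = \Phi \Psi z .

This ties the three core issues in the excerpt together: $\Psi$ provides the sparse representation, $\Phi$ is the observation matrix to be designed, and the $\ell_1$ program is one standard reconstruction algorithm.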