2013
DOI: 10.1109/tsp.2012.2226449
Learning Sparsifying Transforms

Cited by 328 publications (428 citation statements)
References 41 publications
“…There are two well known sparsity models [8] to which the dictionary learning approach is applied. One is the synthesis sparsity model which states that a linear combination of a few of the dictionary atoms is sufficient to represent the signal [8] [9]. The other is analysis sparsity model which suggests that the representation of a signal in the dictionary is sparse [8] [9].…”
Section: Introduction
confidence: 99%
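The synthesis and analysis sparsity models quoted above can be illustrated with a minimal NumPy sketch. All specifics here (the dictionary D, the finite-difference analysis operator, the dimensions and sparsity level) are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesis sparsity: the signal is a linear combination of a few
# dictionary atoms (columns of D).  D, n, K, s are illustrative choices.
n, K, s = 16, 32, 3
D = rng.standard_normal((n, K))
alpha = np.zeros(K)
alpha[rng.choice(K, size=s, replace=False)] = rng.standard_normal(s)
x = D @ alpha                       # x lies in the span of only 3 atoms

# Analysis sparsity: applying an analysis operator Omega to the signal
# yields few nonzeros.  Here Omega takes first-order differences, so a
# piecewise-constant signal is sparse under it (nonzeros only at jumps).
Omega = np.eye(n) - np.eye(n, k=1)
x_pc = np.repeat([1.0, -2.0, 3.0, 0.5], n // 4)  # 4 constant blocks
z = Omega @ x_pc

print(np.count_nonzero(alpha), np.count_nonzero(z))
```

In the synthesis model sparsity lives in the coefficient vector `alpha` that builds the signal; in the analysis model it lives in the output `z` of an operator applied to the signal.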
“…One is the synthesis sparsity model which states that a linear combination of a few of the dictionary atoms is sufficient to represent the signal [8] [9]. The other is analysis sparsity model which suggests that the representation of a signal in the dictionary is sparse [8] [9]. Ravishankar and Bresler explored the transform sparsity model and proposed a parameter dependent transform learning (TL) method for square transforms in [9], for orthogonal transforms (TLortho) in [11] and for overcomplete transforms in [10].…”
Section: Introduction
confidence: 99%
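The transform sparsity model mentioned in this excerpt assumes W x = z + e with z sparse and e a small residual; a key appeal is that sparse coding is then closed-form (thresholding), not an optimization. The sketch below is an assumption-laden illustration of that point, not the authors' learning algorithm: W is just a random orthogonal matrix, and n, s are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random orthogonal transform stands in for a learned W (assumption).
n, s = 16, 4
W, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Build a signal that is approximately sparse under W: W x = z_true + e.
z_true = np.zeros(n)
z_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x = W.T @ z_true + 0.01 * rng.standard_normal(n)

# Sparse coding in the transform model: keep the s largest-magnitude
# entries of W x -- a simple projection, no iterative solver needed.
Wx = W @ x
z = np.zeros(n)
idx = np.argsort(np.abs(Wx))[-s:]
z[idx] = Wx[idx]

residual = np.linalg.norm(Wx - z)   # small, since e is small
```

This cheap sparse-coding step is what distinguishes the transform model from synthesis dictionary learning, where finding the sparse coefficients generally requires greedy or convex-relaxation solvers.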