2010
DOI: 10.1109/tsp.2009.2036477

Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation

Abstract: An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this, the sparse structure bridges the gap between i…
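The D = ΦA structure described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the random base dictionary, dimensions, and per-atom sparsity level s are all assumptions chosen for brevity (the paper uses structured base dictionaries with fast transforms).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, s = 64, 100, 128, 5   # signal dim, base atoms, learned atoms, atom sparsity

# Fixed base dictionary Phi (random here purely for illustration;
# a structured transform would give Phi a fast forward/adjoint).
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# Sparse matrix A: each column has only s nonzeros, so each effective
# atom d_j = Phi @ a_j mixes just s base atoms.
A = np.zeros((m, p))
for j in range(p):
    idx = rng.choice(m, size=s, replace=False)
    A[idx, j] = rng.standard_normal(s)
A /= np.linalg.norm(Phi @ A, axis=0)   # normalize the effective atoms

D = Phi @ A                            # effective dictionary, n x p

# The forward operator D @ gamma factors through the sparse A,
# which is what makes the structure efficient in practice.
gamma = rng.standard_normal(p)
assert np.allclose(D @ gamma, Phi @ (A @ gamma))
```

The adjoint Dᵀy = Aᵀ(Φᵀy) factors the same way, so both directions inherit whatever fast transform Φ provides.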

Cited by 547 publications (401 citation statements)
References 41 publications
“…Dictionary learning algorithms are often sensitive to the choice of m. The update step can either be sequential (one atom at a time) [51,52], or parallel (all atoms at once) [53,54]. A dictionary with sequential update, although computationally a bit expensive, will generally provide better performance than the parallel update, due to the finer tuning of each dictionary atom.…”
Section: Sparse Image Representation
confidence: 99%
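The sequential-versus-parallel distinction drawn in the excerpt above can be made concrete. The sketch below contrasts a MOD-style parallel update (all atoms at once, via a least-squares solve) with a K-SVD-style sequential update (one rank-1 refinement per atom); function names and shapes are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def parallel_update(Y, X):
    """Parallel (MOD-style) update: solve min_D ||Y - D X||_F over all atoms at once."""
    D = Y @ np.linalg.pinv(X)
    return D / np.linalg.norm(D, axis=0)

def sequential_update(Y, D, X):
    """Sequential (K-SVD-style) update: refine one atom and its coefficients at a time."""
    D, X = D.copy(), X.copy()
    for j in range(D.shape[1]):
        users = np.nonzero(X[j])[0]          # signals that currently use atom j
        if users.size == 0:
            continue
        # Residual with atom j's contribution added back in
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                    # best rank-1 fit replaces the atom
        X[j, users] = s[0] * Vt[0]           # coefficients updated jointly
    return D, X
```

Each rank-1 step in the sequential version can only decrease (or keep) the residual, which is the "finer tuning of each dictionary atom" the excerpt refers to, at the cost of one SVD per atom.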
“…Other interesting dictionaries are characterized by several layers. Such dictionaries can be constructed as the composition of a fixed transform and learned dictionaries [20,17]. They can also be dictionaries made of two layers based on a sparsifying transform and a sampling matrix (both layers can be learnt by the algorithm investigated in [3]).…”
Section: Related Work
confidence: 99%
“…This model provides adaptivity via modification of the matrix A [16], [11]. First note that ‖Φ(x_i) − Φ(X)A_iγ_i‖₂² can be kernelized as follows,…”
Section: Non-Linear Discriminative Dictionary (NLDD)
confidence: 99%
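The kernelization claimed in the excerpt above rests on the fact that the feature-space residual depends only on inner products K(u, v) = Φ(u)ᵀΦ(v). The check below verifies the expansion ‖Φ(x) − Φ(X)Aγ‖² = K(x,x) − 2 K(x,X)Aγ + γᵀAᵀK(X,X)Aγ numerically; the linear kernel is an assumption made here so that Φ is explicit (the identity), and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, p = 4, 10, 6
X = rng.standard_normal((n, N))   # training signals as columns
x = rng.standard_normal(n)        # one signal x_i
A = rng.standard_normal((N, p))   # dictionary coefficient matrix A_i
g = rng.standard_normal(p)        # sparse code gamma_i

# Direct feature-space residual (linear kernel, so Phi is the identity).
lhs = np.linalg.norm(x - X @ A @ g) ** 2

# Same quantity written purely in kernel evaluations:
# K(x,x) - 2 K(x,X) A g + g^T A^T K(X,X) A g
Kxx = x @ x
KxX = x @ X
KXX = X.T @ X
rhs = Kxx - 2 * KxX @ A @ g + g @ A.T @ KXX @ A @ g

assert np.isclose(lhs, rhs)
```

Because only Kxx, KxX, and KXX appear on the right-hand side, the same expansion holds for any positive-definite kernel without ever forming Φ explicitly.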
“…This update can be computed efficiently by utilizing approximate kernel-KSVD algorithm [16] in the feature space. …”
Section: Dictionary Update
confidence: 99%