2003
DOI: 10.1162/089976603762552951
Dictionary Learning Algorithms for Sparse Representation

Abstract: Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of suc…

Cited by 707 publications (450 citation statements)

References 40 publications
“…As for the dictionary that yields the sparse decomposition, although working with predefined dictionaries may be simple and fast, their performance may not be good for every task, owing to their global, non-adaptive nature [49]. Instead, learned dictionaries are adaptive to both the signals and the processing task at hand, and thus give far better performance [50].…”
Section: Sparse Image Representation
confidence: 99%
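The contrast drawn above, learning a dictionary from data instead of fixing it in advance, can be illustrated with a minimal alternating scheme: sparse-code the signals against the current dictionary, then refit the dictionary by least squares. This is a generic sketch (not the paper's ML/MAP algorithm); all sizes, the l1 penalty, and iteration counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: signals sparse in a hidden dictionary
# (all sizes and parameters here are illustrative).
n, m, N, k = 16, 32, 200, 3          # signal dim, atoms, signals, sparsity
D_true = rng.standard_normal((n, m))
D_true /= np.linalg.norm(D_true, axis=0)
X = np.zeros((m, N))
for j in range(N):
    idx = rng.choice(m, k, replace=False)
    X[idx, j] = rng.standard_normal(k)
Y = D_true @ X

def sparse_code(D, Y, lam=0.1, iters=50):
    """ISTA: iterative soft-thresholding for the l1-penalized fit."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(iters):
        Z = X - D.T @ (D @ X - Y) / L    # gradient step
        X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # shrink
    return X

D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)
for _ in range(30):                      # alternate coding / dictionary update
    X_hat = sparse_code(D, Y)
    D = Y @ np.linalg.pinv(X_hat)        # least-squares dictionary refit
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms

err = np.linalg.norm(Y - D @ sparse_code(D, Y)) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.3f}")
```

The adaptivity the excerpt describes shows up here: the learned atoms end up matched to the signal ensemble, whereas a fixed random dictionary would need denser codes for the same fit.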
“…The minimum-norm solution has the tendency to spread the energy rather than yield a sparse solution (15,16). To derive FOCUSS, consider the following optimization problem: find x = Wq [5], where x is the unknown image, W is a weighting matrix, and q is computed from the following constrained minimization problem:…”
Section: Review of FOCUSS
confidence: 99%
“…The main contribution of this article is to demonstrate that a new type of sparse reconstruction algorithm, the focal underdetermined system solver (FOCUSS) (14-16), is suitable for projection reconstruction MR imaging. FOCUSS was originally designed for electroencephalogram (EEG) and magnetoencephalography (MEG) source localization problems, obtaining sparse solutions from successive quadratic optimization problems (14,15).…”
confidence: 99%
“…These reconstruction algorithms generally fall within the state-of-the-art compressive sensing (CS) framework, utilizing prior knowledge effectively and permitting accurate and stable reconstruction from a more limited amount of raw data than required by classic Shannon sampling theory. CS-inspired reconstruction algorithms can be roughly categorized into the following stages (Wang et al., 2011): (1) the 1st stage: Candes' total variation (TV) minimization method and variants (initially used for MRI and later tried for CT) (Li and Santosa, '96; Jonsson et al., '98; Candes and Tao, 2005; Landi and Piccolomini, 2005; Yu et al., 2005; Candes et al., 2006, 2008; Block et al., 2007; Landi et al., 2008; Sidky and Pan, 2008; Yu and Wang, 2009); (2) the 2nd stage: soft-thresholding methods adapted for X-ray CT to guarantee convergence (Daubechies et al., 2004; Yu and Wang, 2010; Liu et al., 2011; Yu et al., 2011); and (3) the 3rd stage: dictionary learning (DL) and non-local mean methods being actively developed by our group and others (Kreutz-Delgado et al., 2003; Gao et al., 2011; Lu et al., 2012; Xu et al., 2012; Zhao et al., 2012a,b).…”
Section: Introduction
confidence: 99%
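The soft-thresholding method named in the second stage reduces, per iteration, to an elementwise shrinkage operator: the proximal map of the scaled l1 norm. A minimal numpy sketch; the input values are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding: proximal operator of t * ||.||_1.

    Shrinks each entry toward zero by t and zeroes anything whose
    magnitude is below t -- the elementwise step inside the
    thresholding-based CS reconstruction methods.
    """
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
print(soft_threshold(z, 0.5))
```

Entries with magnitude at most the threshold (here -0.3, 0.0, and 0.5 against t = 0.5) are set to zero, while larger entries are pulled toward zero by t; this bias-for-sparsity trade is what makes the iteration converge to sparse reconstructions.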