2018
DOI: 10.1109/TCI.2018.2840334

Convolutional Dictionary Learning: A Comparative Review and New Algorithms

Abstract: Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine w…


Cited by 157 publications (122 citation statements)
References 38 publications
“…To solve the problem (6), we utilize the Alternating Direction Method of Multipliers (ADMM) [50]–[53] in this paper. We refer readers to [54] for other algorithms to solve this problem. By introducing the dual variable U, penalty parameter ρ, and auxiliary variable Y, the corresponding scaled augmented Lagrangian function is defined as [50]:…”
Section: A. Sparse Coefficients Update
confidence: 99%
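The excerpt's augmented Lagrangian is truncated, but the splitting it describes is the standard scaled-form ADMM for the ℓ1-regularized convolutional sparse coding subproblem. A minimal sketch under that assumption, for a single 1-D signal and with illustrative variable names (not the citing paper's notation):

```python
import numpy as np

def csc_admm(d, s, lam=0.1, rho=1.0, iters=100):
    """ADMM sketch for single-signal convolutional sparse coding:
        min_x 0.5*||sum_m d_m (*) x_m - s||_2^2 + lam * sum_m ||x_m||_1
    where (*) is circular convolution.
    d: (M, N) filters zero-padded to the signal length, s: (N,) signal."""
    M, N = d.shape
    Df = np.fft.fft(d, axis=1)   # filters in the DFT domain
    Sf = np.fft.fft(s)
    y = np.zeros((M, N))         # auxiliary variable (splitting x = y)
    u = np.zeros((M, N))         # scaled dual variable
    for _ in range(iters):
        # x-step: solve (D^H D + rho I) x = D^H s + rho (y - u) per
        # frequency via Sherman-Morrison (rank-1 plus scaled identity).
        bf = np.conj(Df) * Sf + rho * np.fft.fft(y - u, axis=1)
        c = np.sum(Df * bf, axis=0) / (rho + np.sum(np.abs(Df) ** 2, axis=0))
        xf = (bf - np.conj(Df) * c) / rho
        x = np.real(np.fft.ifft(xf, axis=1))
        # y-step: elementwise soft thresholding, the prox of (lam/rho)*||.||_1
        v = x + u
        y = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # u-step: scaled dual ascent on the constraint x = y
        u += x - y
    return y
```

The x-step exploits the fact that in the DFT domain the normal equations decouple into a rank-one-plus-scaled-identity system per frequency, which the Sherman–Morrison formula solves in closed form; this FFT-domain strategy is central to the algorithms compared in the reviewed paper.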
“…One class of approaches, which is the one we follow in this work, uses greedy methods to solve the original problem with the ℓ0 quasi-norm. Another class of approaches relaxes the ℓ0 quasi-norm to the ℓ1 norm, which converts the CSC objective into a convex optimization problem [12], [13], [14]. The advantage of greedy approaches is that they are more efficient computationally [7].…”
Section: Optimization Objective
confidence: 99%
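As a counterpart to the ℓ1 sketch above, here is a minimal greedy (ℓ0) sketch in the spirit of convolutional matching pursuit; the excerpt does not specify which greedy method [7] uses, so the algorithm and names below are illustrative assumptions:

```python
import numpy as np

def conv_matching_pursuit(d, s, n_steps=10):
    """Greedy (l0) sketch: convolutional matching pursuit. Each step picks
    the (filter, circular shift) pair best correlated with the residual.
    d: (M, K) filters, s: (N,) signal; returns coefficient maps x: (M, N)."""
    M, K = d.shape
    N = s.shape[0]
    dp = np.zeros((M, N))
    dp[:, :K] = d                      # zero-pad filters to signal length
    Df = np.fft.fft(dp, axis=1)
    norms = np.sum(d ** 2, axis=1)     # ||d_m||_2^2 for the coefficient update
    x = np.zeros((M, N))
    r = s.astype(float).copy()         # residual
    for _ in range(n_steps):
        # circular cross-correlation of every filter with the residual
        corr = np.real(np.fft.ifft(np.conj(Df) * np.fft.fft(r), axis=1))
        m, t = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        alpha = corr[m, t] / norms[m]  # optimal coefficient for this atom
        x[m, t] += alpha
        r -= alpha * np.roll(dp[m], t) # subtract the selected shifted atom
    return x, r
```

Each iteration costs a few FFTs and touches a single coefficient, which illustrates the excerpt's point that greedy updates can be cheaper per step than solving the full convex ℓ1 problem.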
“…The majority of existing CDU frameworks estimate the templates $\{h_c\}_{c=1}^{C}$ by minimizing the error of reconstructing $y$ using its linear approximation $Hx$. The key differences between existing approaches are the constraints imposed on the templates and the optimization methods used, as detailed in a recent survey [12]. To the best of our knowledge, existing CDU approaches do not address the problem of learning the templates in the presence of time-quantization errors.…”
Section: CDU Framework
confidence: 99%
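The reconstruction objective the excerpt describes is commonly written in the following form (a standard formulation; the ℓ1 penalty and the unit-norm constraint on the templates are typical choices from the surveyed literature, not details given in the excerpt):

$$\min_{\{h_c\},\,x}\ \frac{1}{2}\,\lVert y - Hx \rVert_2^2 + \lambda \lVert x \rVert_1 \quad \text{s.t.}\ \lVert h_c \rVert_2 \le 1,\quad c = 1,\dots,C,$$

where $Hx = \sum_{c=1}^{C} h_c \ast x_c$ is the linear approximation formed by convolving each template with its coefficient map.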
“…For instance, the blind gain and phase calibration problem [LLB16, LS18, LLB18] is closely related to the MCS-BD problem, as mentioned by [LB18]. It is also of great interest to extend our approach for solving the so-called convolutional dictionary learning problem [CF17, GCW18], in which each measurement consists of a superposition of multiple circulant convolutions:…”
Section: Future Directions
confidence: 99%
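The display equation is cut off in the excerpt. The superposition-of-circulant-convolutions model it introduces is typically written as follows (notation assumed for illustration, not quoted from [GCW18]):

$$y_j \;=\; \sum_{c=1}^{C} a_c \circledast x_{c,j}, \qquad j = 1,\dots,J,$$

where $\circledast$ denotes circular convolution, the kernels $a_c$ are shared across measurements, and the sparse maps $x_{c,j}$ vary per measurement $y_j$.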