2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9190923
CoRNet: Composite-Regularized Neural Network for Convolutional Sparse Coding

Cited by 4 publications (8 citation statements)
References 25 publications
“…However, the trade-off between the number of layers of the network and training examples, and the suboptimal recovery of low-amplitude reflection coefficients require further attention. One could also consider data-driven prior learning [30] based on a multi-penalty formulation [42] for solving the reflectivity inversion problem.…”
Section: Discussion
confidence: 99%
“…They proposed learned ISTA (LISTA), based on unrolling the update steps of ISTA [14] into the layers of a feedforward neural network. Subsequent studies have demonstrated the efficacy of this class of model-based deep learning architectures [33] in compressed sensing and sparse-recovery applications [32], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43]. Deep unfolding combines the advantages of both data-driven and iterative techniques.…”
Section: A. Prior Art
confidence: 99%
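The statement above describes unrolling ISTA into network layers. A minimal sketch of that idea follows, with the per-layer weights initialized from the dictionary A as in classical ISTA; in LISTA proper, `W_e`, `S`, and the threshold `theta` would instead be learned from data. All names here are illustrative, not from the cited papers.

```python
import numpy as np

def soft_threshold(x, theta):
    # proximal operator of the l1 norm (the ISTA shrinkage step)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, n_layers=100, lam=0.01):
    """Forward pass of an unrolled ISTA (LISTA-style) network.

    Each 'layer' is one ISTA iteration:
        x <- soft(W_e @ y + S @ x, theta)
    Here W_e, S, theta are fixed ISTA quantities; LISTA would
    train them end-to-end.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T A
    W_e = A.T / L                            # input weight
    S = np.eye(A.shape[1]) - (A.T @ A) / L   # lateral (recurrent) weight
    theta = lam / L                          # per-layer threshold
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W_e @ y + S @ x, theta)
    return x
```

Because every layer mirrors one proximal-gradient step, a trained network with a handful of layers can match the accuracy of many hundreds of plain ISTA iterations, which is the efficiency argument the cited works make.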
“…Our algorithm is referred to as the nonuniform proximal-averaged thresholding algorithm (NuPATA-1), which relies on majorization-minimization (MM) [49] and the proximal average strategy [26], [27], [40]. Further, we unfold the NuPATA-1 iterations into a learnable network called the nonuniform sparse proximal average network (NuSPAN-1).…”
Section: A. Problem Formulation - Type-1
confidence: 99%
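The proximal average strategy mentioned above replaces the (often intractable) proximal operator of a composite penalty with a convex combination of the individual penalties' proximal operators. A small sketch of that idea, using the l1 norm and the minimax concave penalty (MCP) as two illustrative penalties; these choices and all names are assumptions for illustration, not the exact NuPATA-1 formulation.

```python
import numpy as np

def prox_l1(x, t):
    # proximal operator of t * ||x||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_mcp(x, t, gamma=3.0):
    # proximal operator of the minimax concave penalty (gamma > 1):
    # firm thresholding below gamma*t, identity above it
    return np.where(np.abs(x) <= gamma * t,
                    (gamma / (gamma - 1.0)) * prox_l1(x, t),
                    x)

def prox_average(x, t, weights=(0.5, 0.5)):
    # proximal-average surrogate: the weighted average of the
    # individual proximal operators stands in for the prox of the
    # weighted combination of penalties
    return weights[0] * prox_l1(x, t) + weights[1] * prox_mcp(x, t)
```

Unfolding then proceeds as with ISTA: each layer applies a gradient step followed by `prox_average`, with the combination weights made learnable per layer.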
“…Gregor and LeCun [28] proposed the learned iterative shrinkage and thresholding algorithm (LISTA), based on unfolding the update steps of ISTA [30] into the layers of a neural network. This class of model-based architectures [31] has been demonstrated to be effective in solving sparse linear inverse problems [29], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41].…”
Section: Introduction
confidence: 99%
“…In the context of deep unfolding networks, Jawali et al [23] considered an average of multiple penalties for solving the convolutional sparse coding problem. Recently, Mache et al [24] proposed a learnable composite regularization scheme based on a nonuniform sparse model for solving the problem of seismic reflectivity inversion.…”
Section: Introduction
confidence: 99%