2018
DOI: 10.1609/aaai.v32i1.11721
SC2Net: Sparse LSTMs for Sparse Coding

Abstract: The iterative shrinkage-thresholding algorithm (ISTA) is one of the most popular optimization solvers for computing sparse codes. However, ISTA suffers from the following problems: 1) ISTA employs a non-adaptive updating strategy, learning the parameters on each dimension with a fixed learning rate; such a strategy may lead to inferior performance due to the scarcity of diversity. 2) ISTA does not incorporate historical information into its updating rules, and historical information has been proven helpful to speed u…

Cited by 27 publications (7 citation statements)
References 18 publications
“…In the optimization community, it has been extensively studied and proven [25], [26], [27] that incorporating historical information improves algorithm performance. Inspired by the high-level ideas of this previous research, the authors of [28] proposed adaptive ISTA, which integrates historical information by introducing two adaptive momentum vectors f and i into each ISTA iteration, formulated as follows:…”
Section: A. Adaptive ISTA and SC2Net
confidence: 99%
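The gated update described in the statement above can be sketched numerically. This is a minimal NumPy illustration assuming a LISTA-style parameterization (the names `W_e`, `S`, `theta` and the scalar gate values are assumptions for illustration; in the paper the momentum vectors f and i are adaptive, per-dimension quantities):

```python
# Hedged sketch of "adaptive ISTA": plain ISTA augmented with two
# momentum/gate vectors f and i that accumulate historical information,
# as described in [28]. Gate values here are fixed scalars for clarity;
# the paper's f and i are adaptive vectors.
import numpy as np

def soft_threshold(v, theta):
    """Element-wise soft-thresholding (the ISTA proximal step)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def adaptive_ista(x, D, theta=0.1, T=50, f=0.5, i=0.5):
    """Sparse-code x over dictionary D with a gated cell state:
    c_t = f * c_{t-1} + i * (W_e x + S z_{t-1});  z_t = soft(c_t)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of D^T D
    W_e = D.T / L                            # input filter
    S = np.eye(D.shape[1]) - D.T @ D / L     # recurrent (mutual-inhibition) matrix
    z = np.zeros(D.shape[1])
    c = np.zeros(D.shape[1])
    for _ in range(T):
        c = f * c + i * (W_e @ x + S @ z)    # gated accumulation of history
        z = soft_threshold(c, theta)         # sparsity-inducing nonlinearity
    return z
```

Setting f = 0 and i = 1 recovers the ordinary (unrolled) ISTA iteration, which makes the role of the momentum terms explicit.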
“…Usually, they are selected by tedious hand-crafted tuning, which takes a lot of time and cannot guarantee optimal performance. To address this issue, the authors of [28] proposed SC2Net, recasting adaptive ISTA as a recurrent neural network so that the two momentum vectors are parameterized and learned from data. SC2Net is built from sparse long short-term memory (SLSTM) [28] units, as demonstrated in Fig.…”
Section: A. Adaptive ISTA and SC2Net
confidence: 99%
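The idea of learning the momentum vectors rather than hand-tuning them can be sketched as an LSTM-like cell whose gates are data-dependent. This is an illustrative reading of the cited description, not the authors' exact SLSTM architecture; the weight names (`W_f`, `W_i`, `W_e`, `S`) are assumptions:

```python
# Sketch of a sparse-LSTM-style cell: the adaptive-ISTA momentum vectors
# f_t and i_t are produced by learned sigmoid gates instead of being
# hand-tuned constants, echoing how SC2Net [28] recasts the iteration
# as a recurrent unit. Architecture details here are assumed.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

class SLSTMCell:
    """One recurrent step with data-dependent gates f_t, i_t."""
    def __init__(self, n_in, n_code, theta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_e = 0.1 * rng.standard_normal((n_code, n_in))   # input filter
        self.S = 0.1 * rng.standard_normal((n_code, n_code))   # recurrent matrix
        self.W_f = 0.1 * rng.standard_normal((n_code, n_in))   # forget-gate weights
        self.W_i = 0.1 * rng.standard_normal((n_code, n_in))   # input-gate weights
        self.theta = theta

    def step(self, x, z_prev, c_prev):
        f = sigmoid(self.W_f @ x)                  # learned momentum vector f_t
        i = sigmoid(self.W_i @ x)                  # learned momentum vector i_t
        c = f * c_prev + i * (self.W_e @ x + self.S @ z_prev)
        z = soft_threshold(c, self.theta)          # sparsifying activation
        return z, c
```

In training, all four weight matrices would be optimized by backpropagation through the unrolled iterations, which is what removes the hand-tuning step criticized in the quote.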
“…To address this computational difficulty of conventional DL methods, differentiable programming solutions (Gregor and LeCun 2010; Zhou, Di, Du, Peng, Yang, Pan, Tsang, Liu, Qin and Goh 2018) have been developed. They take advantage of the efficiency of neural networks to reduce the learning time.…”
Section: Introduction
confidence: 99%
“…For example, LISTA (Gregor and LeCun 2010) was first proposed to unfold iterative shrinkage-thresholding into an RNN format, thus speeding up SDL. Sparse LSTM (SLSTM) (Zhou et al. 2018) adapts LISTA to a Long Short-Term Memory (LSTM) structure to automatically learn the dimension of the sparse representation. Although these approaches successfully resolve the computational limitation of single-layer DL methods, none of them offers a fast, scalable and reliable solution for DDL methods.…”
Section: Introduction
confidence: 99%
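The unfolding idea referenced above can be made concrete with a short sketch: T iterations of ISTA written as a T-layer recurrent network with shared weights. The initialization below follows the classical ISTA-derived values; in LISTA proper these matrices are subsequently trained by backpropagation (function and variable names are illustrative assumptions):

```python
# Minimal sketch of LISTA-style unrolling (Gregor & LeCun 2010):
# z_t = soft(W_e x + S z_{t-1}) applied T times with shared weights,
# i.e. ISTA rewritten as a fixed-depth recurrent network.
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_init(D):
    """ISTA-derived initialization of the unrolled weights for dictionary D."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of D^T D
    return D.T / L, np.eye(D.shape[1]) - D.T @ D / L

def lista_forward(x, W_e, S, theta, T=16):
    """Run T unrolled layers; in LISTA, W_e, S, theta are then trained."""
    b = W_e @ x                                  # computed once, reused per layer
    z = soft_threshold(b, theta)
    for _ in range(T - 1):
        z = soft_threshold(b + S @ z, theta)
    return z
```

The speed-up the quote mentions comes from T being a small fixed depth (e.g. 16 layers) instead of the hundreds of iterations ISTA may need to converge.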
“…A series of studies on LISTA has attracted increasing attention and inspired many subsequent works in different aspects, including learning-based optimization (Xie et al. 2019; Sun et al. 2016), the design of DNNs (Metzler, Mousavi, and Baraniuk 2017; Zhang and Ghanem 2018; Zhou et al. 2018; Chen et al. 2020; Rick Chang et al. 2017; Zhang et al. 2020; Simon and Elad 2019) and the interpretation of DNNs (Zarka et al. 2020; Papyan, Romano, and Elad 2017; Aberdam, Sulam, and Elad 2019; Sulam et al. 2018, 2019). Chen et al. (2018b) proved the coupling relationship between W_1^t and W_2^t, i.e., W_2^t → (I − W_1^t A) as t → ∞, which greatly reduces the number of learnable parameters of LISTA.…”
Section: Introduction
confidence: 99%
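The coupling result quoted above can be illustrated in a few lines: tying W2 = I − W1 A makes W2 a derived quantity rather than a free parameter, halving the learnable matrices per LISTA step. A hedged sketch, with symbols following the quoted statement (the step form z ← soft(W1 x + W2 z) is the standard LISTA layer, assumed here):

```python
# Illustration of the weight coupling noted by Chen et al. (2018b):
# with W2 tied to I - W1 @ A, only W1 remains learnable in the step
# z_{t+1} = soft(W1 x + W2 z_t). A is the measurement/dictionary matrix.
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def tied_lista_step(z, x, A, W1, theta):
    W2 = np.eye(A.shape[1]) - W1 @ A   # coupled matrix, not a free parameter
    return soft_threshold(W1 @ x + W2 @ z, theta)
```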