2012
DOI: 10.1109/jetcas.2012.2214615

Low Power Sparse Approximation on Reconfigurable Analog Hardware

Cited by 32 publications (34 citation statements); References 29 publications
“…The network architecture being used in this study provably solves the optimization in Eq. (2) with strong convergence guarantees [27], can implement many variations of the sparse coding hypothesis (i.e., different sparsity-inducing cost functions) [28], and is implementable in neuromorphic analog circuits [29].…”
Section: Results
confidence: 99%
“…The design and implementation of analog circuits have traditionally been difficult, but recent advances in reconfigurable analog circuits (Twigg & Hasler, 2009) have addressed many of the issues related to the design of these systems. In fact, the reconfigurable platform described in Twigg and Hasler (2009) has been used to implement a small version of the LCA for solving BPDN (Shapero, Charles, et al., in press; Shapero, Rozell, et al., 2012), and preliminary tests of this implementation are consistent with simulations of the idealized LCA. These results lend encouragement to the idea that efficient analog circuits could be implemented for the variety of cost functions described in this letter.…”
Section: Discussion
confidence: 97%
“…In particular, each node consists of a leaky integrator and a nonlinear thresholding function, and it is driven by both feed-forward and lateral (inhibitory and excitatory) recurrent connections. This architecture has been implemented in neuromorphic hardware as a purely analog system (Shapero, Charles, et al., in press) and by using integrate-and-fire spiking neurons for each node (Shapero, Rozell, et al., 2012). We also note that other types of network structures have been proposed recently to approximately solve specific versions of the sparse approximation problem (Rehn & Sommer, 2007; Perrinet et al., 2004; Zylberberg et al., 2011; Hu, Genkin, & Chklovskii, 2012).…”
Section: Background and Related Work
confidence: 99%
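The architecture quoted above (leaky integrators with a nonlinear threshold, driven by feed-forward input and lateral recurrent connections) describes the Locally Competitive Algorithm (LCA). A minimal numerical sketch of those dynamics, assuming a soft-threshold nonlinearity and illustrative parameter names (`lam`, `tau`, `dt` are not taken from the paper), might look like:

```python
import numpy as np

def soft_threshold(u, lam):
    # Sparsity-inducing nonlinearity: shrink internal state toward zero
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(y, Phi, lam=0.1, tau=0.01, n_steps=500, dt=0.001):
    """Euler-integrate LCA dynamics for BPDN: each node is a leaky
    integrator driven by the feed-forward input Phi^T y and inhibited
    through lateral connections -(Phi^T Phi - I) a."""
    n = Phi.shape[1]
    u = np.zeros(n)                  # internal (membrane) states
    b = Phi.T @ y                    # feed-forward drive
    G = Phi.T @ Phi - np.eye(n)      # lateral (recurrent) connections
    for _ in range(n_steps):
        a = soft_threshold(u, lam)   # thresholded node outputs
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)
```

With an orthonormal dictionary the lateral term vanishes and each state relaxes toward its feed-forward drive, so the output converges to the soft-thresholded coefficients, matching the BPDN solution in that simple case.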
“…where σ² is the measurement noise variance and the signal estimate is x = W z. BPDN has been a particularly popular approach due to its strong performance guarantees [7] and the development of many specialized optimization approaches [8]-[13]. As a guarantee of the measurement quality, we say that Φ satisfies the restricted isometry property with parameters 2S and δ (RIP(2S,δ)) with respect to W if for every 2S-sparse vector z we have that…”
Section: A. Sparse Signal Estimation
confidence: 99%
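The excerpt is truncated before the inequality itself. For reference, the standard RIP(2S, δ) condition that this sentence leads into (supplied here in its conventional form, not quoted from the cited paper) is:

```latex
(1 - \delta)\,\|z\|_2^2 \;\le\; \|\Phi W z\|_2^2 \;\le\; (1 + \delta)\,\|z\|_2^2
```

for every 2S-sparse vector z, i.e., the measurement operator approximately preserves the energy of all sufficiently sparse signals in the dictionary W.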