2012
DOI: 10.1109/jsen.2011.2113393

1.1 TMACS/mW Fine-Grained Stochastic Resonant Charge-Recycling Array Processor

Cited by 25 publications (14 citation statements)
References 23 publications
“…The numerical results suggest that 2-layer BMNNs can work just as well as 2-layer RMNNs, although they may require a larger width. The weights of the BMNNs we have trained can now be immediately implemented in a hardware chip, such as (Karakiewicz et al., 2012), significantly improving their speed and energy efficiency in comparison to software-based RMNNs. It remains to be seen whether deep BMNNs can compete with RMNNs with (usually, fine-tuned) deep architectures, which achieve state-of-the-art performance.…”
Section: Discussion (mentioning)
confidence: 99%
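As a concrete illustration of the point above, here is a minimal sketch (in NumPy, with assumed layer sizes and a sign activation; this is not the citing paper's architecture or the Karakiewicz et al. chip's programming model) of a 2-layer forward pass in which all weights are restricted to ±1, so every multiply in the two matrix-vector products reduces to a sign flip followed by accumulation:

import numpy as np

# Illustrative sketch: 2-layer binary-weight network (BMNN) forward pass.
# Layer sizes and the sign nonlinearity are assumptions for this example.
rng = np.random.default_rng(0)
W1 = rng.choice([-1, 1], size=(256, 784))   # hidden-layer weights in {-1, +1}
W2 = rng.choice([-1, 1], size=(10, 256))    # output-layer weights in {-1, +1}

def forward(x):
    # With ±1 weights, each multiply-accumulate needs only an add or subtract,
    # which is what makes a dense analog/mixed-signal mapping attractive.
    h = np.sign(W1 @ x)    # hidden activations in {-1, 0, +1}
    return W2 @ h          # class scores

x = rng.standard_normal(784)    # e.g., one flattened 28x28 input
print(forward(x).argmax())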
“…For example, it could be very useful if weights are restricted to assume only binary values (e.g., ±1). This may allow a dense, fast and energetically efficient hardware implementation of MNNs (e.g., with the chip in (Karakiewicz et al., 2012), which can perform 10^12 operations per second with 1 mW power efficiency). Limiting the weights to binary values only mildly reduces the (linear) computational capacity of an MNN (at most, by a logarithmic factor (Ji & Psaltis, 1998)).…”
Section: Introduction (mentioning)
confidence: 99%
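Taking the excerpt's figures at face value (a rough check, not a re-measurement of the chip): 10^12 operations per second sustained at 1 mW corresponds to an energy of about 1 mW / 10^12 op/s = 10^-15 J, i.e., roughly 1 fJ per operation.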
“…Recently, a 10^15 operations/watt analog deep machine-learning engine composed of an 8×4 array of parallel reconfigurable analog computation cells (RAC) was presented in [19], which mimics the hierarchical representation of information in the human brain to achieve robust automated feature extraction with accuracy comparable to the baseline software simulations. In [20], a mixed-signal VLSI array with 1.1 TMACS (10^12 multiply-and-accumulates per second) per mW is presented, which is used in applications like pattern recognition and data compression.…”
Section: Analog Image and Video Processing (mentioning)
confidence: 99%
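The two efficiency figures quoted above are directly comparable once the units are aligned (again using the numbers as stated in the excerpt): 1.1 TMACS/mW is 1.1 × 10^12 multiply-and-accumulates per second per milliwatt, i.e., 1.1 × 10^15 MACs per second per watt, so both designs sit at roughly 10^15 operations per watt.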
“…Vector–vector and matrix–vector multiplications (VVM/MVM) are pervasive operations and the basis of the most relevant and commonly used algorithms in signal processing [1,2]: fast Fourier transforms (FFTs), convolutions, digital filters, and neural networks are some of the most prominent examples. In particular, deep and convolutional neural networks (abbreviated as DNNs and ConvNets, respectively) are nowadays receiving considerable attention in the area of machine learning due to their high efficacy in classification tasks.…”
Section: Introduction (mentioning)
confidence: 99%
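To make the MVM claim concrete, here is a minimal sketch (NumPy, illustrative sizes) showing a 1-D convolution / FIR filter rewritten as a matrix-vector product, the same primitive a dense neural-network layer uses; the filter-matrix construction is an assumption for the example, not taken from the cited works:

import numpy as np

# Illustrative sketch: a valid-mode 1-D convolution expressed as an MVM.
h = np.array([0.25, 0.5, 0.25])                    # filter taps
x = np.random.default_rng(1).standard_normal(8)    # input signal

n_out = len(x) - len(h) + 1
H = np.zeros((n_out, len(x)))
for i in range(n_out):
    H[i, i:i + len(h)] = h[::-1]    # each row is the (reversed) filter, shifted

y_mvm = H @ x                                   # matrix-vector product
y_ref = np.convolve(x, h, mode="valid")         # reference convolution
assert np.allclose(y_mvm, y_ref)                # identical results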