2020 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc19947.2020.9062979

33.1 A 74 TMACS/W CMOS-RRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-situ Transposable Weights for Probabilistic Graphical Models

Cited by 108 publications (54 citation statements)
References 4 publications
“…Memristive crosspoint arrays have been shown to be able to accelerate different MVM-based problems, such as the training [16,27,38] and inference [20,41] of neural networks, image processing [18], sparse coding [29], optimization problems [6,24] and the solution of linear equations through iterative numerical approaches [14,43]. Integrated circuits able to accelerate MVM, comprising memristive arrays together with the circuitry needed to generate the inputs (digital-to-analog converters, DACs), to sense and read the outputs (transimpedance amplifiers, TIAs, and analog-to-digital converters, ADCs), and to select and route cells, have been proposed [5,37,42], outperforming modern processors in both throughput and energy efficiency [42].…”
Section: In-memory Matrix-vector-multiplication Accelerator
Mentioning confidence: 99%
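The quoted passage concerns crossbar-based in-memory MVM, where row voltages are multiplied by stored cell conductances (Ohm's law) and column currents sum the products (Kirchhoff's current law), with DACs driving the rows and TIAs/ADCs reading the columns. The Python sketch below illustrates that dataflow numerically; the conductance range, differential-pair weight mapping, noise level, and ADC resolution are illustrative assumptions, not parameters from the cited works.

```python
import numpy as np

# Minimal sketch of analog in-memory MVM on a memristive crossbar:
# inputs are applied as row voltages, weights are stored as cell
# conductances, and column currents give the dot products.
# The conductance range, ADC resolution, and noise level below are
# illustrative assumptions, not values from the cited papers.
G_MIN, G_MAX = 1e-6, 100e-6   # cell conductance range in siemens (assumed)
ADC_BITS = 8                  # output quantization (assumed)
READ_NOISE_STD = 0.01         # relative read-current noise (assumed)

def program_crossbar(W):
    """Map a real-valued weight matrix onto differential conductance pairs."""
    w_max = np.abs(W).max() or 1.0
    g_pos = G_MIN + (G_MAX - G_MIN) * np.clip(W, 0, None) / w_max
    g_neg = G_MIN + (G_MAX - G_MIN) * np.clip(-W, 0, None) / w_max
    return g_pos, g_neg

def crossbar_mvm(g_pos, g_neg, v_in):
    """Column currents sum G^T v (Kirchhoff's law); quantize like an ADC."""
    i_out = (g_pos - g_neg).T @ v_in                     # ideal analog MVM
    i_out = i_out * (1 + np.random.normal(0, READ_NOISE_STD, i_out.shape))
    i_max = np.abs(i_out).max() or 1.0
    levels = 2 ** (ADC_BITS - 1)
    return np.round(i_out / i_max * levels) / levels * i_max

W = np.random.randn(64, 32)     # weights stored in a 64x32 array
v = np.random.rand(64) * 0.2    # input read voltages (volts)
g_pos, g_neg = program_crossbar(W)
analog = crossbar_mvm(g_pos, g_neg, v)
digital = W.T @ v               # digital reference
# The analog result tracks the digital MVM up to a scale factor plus noise.
print(np.corrcoef(digital, analog)[0, 1])
```

The differential conductance pair (g_pos, g_neg) is one common way to represent signed weights with non-negative conductances; the cited designs may use different weight-mapping and readout schemes.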
“…The chip operates from a 1.8 V supply for both digital and analog blocks. Measurements show that each neuron draws 63 nW of static power from the 1.8 V supply [9]. The total (static + dynamic) power consumption of the chip (256 I&F neurons, biasing and peripherals) is 140.6 µW at a data throughput of 92.5 MSpikes/s.…”
Section: System-on-chip Architecture and Implementation
Mentioning confidence: 99%
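As a quick consistency check on the quoted figures, the average energy per spike follows from dividing total power by spike throughput. The snippet below is back-of-the-envelope arithmetic on the numbers quoted above, not additional measured data.

```python
# Back-of-the-envelope arithmetic on the figures quoted above:
# 140.6 uW total power at 92.5 MSpikes/s, 63 nW static power per neuron.
total_power_w = 140.6e-6       # static + dynamic, full chip
throughput_sps = 92.5e6        # spikes per second
static_per_neuron_w = 63e-9    # static power per I&F neuron
num_neurons = 256

energy_per_spike_j = total_power_w / throughput_sps          # ~1.52e-12 J
neuron_static_total_w = static_per_neuron_w * num_neurons    # ~16.1e-6 W

print(f"average energy per spike ~ {energy_per_spike_j * 1e12:.2f} pJ")
print(f"neuron static power total ~ {neuron_static_total_w * 1e6:.1f} uW")
```

At roughly 1.5 pJ per spike, the neurons' combined static power (about 16 µW for 256 neurons) is a small fraction of the 140.6 µW total, which also covers biasing and peripherals.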
“…The neuron design supports a variety of activation functions that can cater to deep learning and neuromorphic applications. At only a fraction of the energy of state-of-the-art implementations, this architecture can be easily adopted and scaled for large networks [9].…”
Section: Introduction
Mentioning confidence: 99%
“…The realization of activation and intra-layer communication is carried out by an off-chip field-programmable gate array (FPGA). Recently, this field has been rapidly developing toward monolithically integrated memristive neuromorphic systems, even though the memristive analog behavior has not been fully exploited (Liu et al., 2020; Wan et al., 2020).…”
Section: Introduction
Mentioning confidence: 99%