2019
DOI: 10.48550/arxiv.1906.09395
Preprint

Adaptive Precision CNN Accelerator Using Radix-X Parallel Connected Memristor Crossbars

Abstract: Neural processor development is reducing our reliance on remote server access to process deep learning operations in an increasingly edge-driven world. By employing in-memory processing, parallelization techniques, and algorithm-hardware co-design, memristor crossbar arrays are known to efficiently compute large-scale matrix-vector multiplications. However, state-of-the-art implementations of negative weights require duplicative column wires, and high-precision weights using single-bit memristors further distrib…
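For context on the "duplicative column wires" the abstract refers to, here is a minimal illustrative sketch (not taken from the paper) of how a crossbar computes a signed matrix-vector product when each weight must be split across a positive and a negative non-negative conductance column; all names are assumptions for illustration.

```python
def signed_crossbar_mvm(weights, v):
    """Model a memristor crossbar computing y = W @ v.

    A memristor conductance is non-negative, so each signed weight
    column is split into a positive and a negative conductance column
    (the duplicated column wires): w = g_pos - g_neg.
    """
    rows, cols = len(weights), len(weights[0])
    y = [0.0] * rows
    for i in range(rows):
        for j in range(cols):
            g_pos = max(weights[i][j], 0.0)   # conductance on the + column
            g_neg = max(-weights[i][j], 0.0)  # conductance on the - column
            # Output is the difference of the two column currents.
            y[i] += g_pos * v[j] - g_neg * v[j]
    return y

W = [[1.0, -2.0], [-0.5, 3.0]]
v = [2.0, 1.0]
print(signed_crossbar_mvm(W, v))  # [0.0, 2.0], i.e. the direct product W @ v
```

The doubling of column wires for sign handling is exactly the area overhead the paper's radix-X scheme targets.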

Cited by 4 publications (6 citation statements)
References 33 publications
“…Feasible methods include component shutdown, input filtering, early exit, and results caching. This concept can be likened to block-wise dropout [135,136]. Additionally, random gossip communication can help reduce unnecessary calculations and model updates.…”
Section: Conditional Computation
confidence: 99%
“…Remarkably, this type of neural network can approach near-state-of-the-art performance on vision tasks [5]. One particularly investigated lead is to fabricate hardware BNNs with emerging memories such as resistive RAM or memristors [6]- [13]. The low memory requirements of BNNs, as well as their reliance on simple arithmetic operations, make them indeed particularly adapted for "in-memory" or "near-memory" computing approaches, which achieve superior energy-efficiency by avoiding the von-Neumann bottleneck entirely.…”
Section: Introduction
confidence: 99%
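The "simple arithmetic operations" that make BNNs attractive for in-memory computing reduce a dot product of ±1 vectors to an XNOR plus a popcount. A minimal sketch (not from any of the cited works), assuming +1 is encoded as bit 1 and -1 as bit 0:

```python
def bnn_dot(a_bits, b_bits, n):
    """Dot product of two length-n vectors over {-1, +1}, each packed
    into an integer (+1 -> bit 1, -1 -> bit 0). Matching bits contribute
    +1 and differing bits -1, so dot = 2 * popcount(xnor) - n."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    return 2 * bin(xnor).count("1") - n

# a = [+1, -1, +1, -1] -> 0b1010, b = [+1, +1, -1, -1] -> 0b1100
print(bnn_dot(0b1010, 0b1100, 4))  # 0, same as the direct ±1 dot product
```

Because the kernel is bitwise, a single binary memristor per weight suffices, which is why resistive memory maps so naturally onto BNN inference.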
“…In particular, multiple groups investigate the implementation of BNN inference with resistive memory tightly integrated at the core of CMOS [6]- [13]. Usually, resistive memory stores the synaptic weights W ji .…”
Section: Introduction
confidence: 99%
“…Remarkably, this type of neural network can achieve high accuracy on vision tasks [5]. One particularly investigated lead is to fabricate hardware BNNs with emerging memories such as resistive RAM or memristors [6]- [13]. The low memory requirements of BNNs, as well as their reliance on simple arithmetic operations, make them indeed particularly adapted for "in-memory" or "near-memory" computing approaches, which achieve superior energy-efficiency by avoiding the von Neumann bottleneck entirely.…”
Section: Introduction
confidence: 99%
“…Due to their reduced memory requirements, and reliance on simple arithmetic operations, BNNs are especially appropriate for in-or near-memory implementations. In particular, multiple groups investigate the implementation of BNN inference with resistive memory tightly integrated at the core of CMOS [6]- [13].…”
Section: Introduction
confidence: 99%