2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD)
DOI: 10.1109/iccad51958.2021.9643474

Automated Generation of Integrated Digital and Spiking Neuromorphic Machine Learning Accelerators

Abstract: The growing number of application areas for artificial intelligence (AI) methods has led to an explosion in the availability of domain-specific accelerators, which struggle to support every new machine learning (ML) algorithm advancement, clearly highlighting the need for a tool to quickly and automatically transition from algorithm definition to hardware implementation and to explore the design space along a variety of SWaP (size, weight, and power) metrics. The software defined architectures (SODA) synthesizer imp…

Cited by 15 publications (7 citation statements)
References 43 publications
“…While traditional von Neumann architectures have one or more central processing units physically separated from the main memory, neuromorphic architectures exploit the co-localization of memory and compute, with near- and in-memory computation [18]. Simultaneously with the tremendous progress in devising novel neuromorphic computing architectures, there have been many recent works that address how to map and compile (trained) SNN models for efficient execution on neuromorphic hardware [19][20][21][22][23][24][25][26][27][28][29][30][31].…”
Section: Introduction
confidence: 99%
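The spiking neural network (SNN) models that the statement above refers to can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic, self-contained sketch of the kind of model such compilers map to neuromorphic hardware, not code from the cited works; the decay, threshold, and input values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of the
# kind of SNN model that neuromorphic mapping/compilation tools target.
# All parameters (decay, threshold, inputs) are illustrative assumptions.

def lif_neuron(inputs, decay=0.9, threshold=1.0):
    """Leakily integrate input current each step; emit a spike (1) when the
    membrane potential crosses the threshold, then reset the potential."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = decay * v + current      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)         # fire a spike
            v = 0.0                  # reset membrane potential after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.4, 0.4, 0.4, 0.0, 1.2]))  # → [0, 0, 1, 0, 1]
```

Unlike the activations of a conventional artificial neural network, the neuron communicates only through these sparse binary spike events, which is what makes co-locating memory and compute attractive in neuromorphic hardware.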
“…In [139], Curzel et al. propose an automated framework called SODASNN to synthesize a hybrid neuromorphic architecture consisting of digital and analog components. The framework builds on the software defined architecture (SODA) synthesizer [140], a novel no-human-in-the-loop hardware generator that automates the creation of machine learning (ML) accelerators from high-level ML languages.…”
Section: System Software For Performance and Energy Optimization
confidence: 99%
“…PSOPART [27] minimizes spike latency on the shared interconnect, SpiNeMap [9] minimizes interconnect energy, DFSynthesizer [82] maximizes throughput, DecomposedSNN [11] maximizes crossbar utilization, EaNC [90] minimizes the overall energy of a machine learning task by targeting both computation and communication energy, TaNC [89] minimizes the average temperature of each crossbar, eSpine [91] maximizes NVM endurance in a crossbar, RENEU [80] minimizes circuit aging in a crossbar's peripheral circuits, and NCil [86] reduces read disturb issues in a crossbar, improving the inference lifetime. Besides these techniques, there are also other software frameworks [1,3,4,6,12,23,25,38,47,50,54,60,71,75,76,78,85,88] and run-time approaches [10,84] addressing one or more of these optimization objectives.…”
Section: Hardware Implementation Of Machine Learning Inference
confidence: 99%
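The mapping objectives listed above (e.g., SpiNeMap's interconnect-energy minimization) all reduce to placing neuron clusters onto cores so that as little spike traffic as possible crosses the shared interconnect. The following toy sketch shows that objective with a naive greedy heuristic; the function names, traffic numbers, and heuristic are illustrative assumptions, not the algorithm of any cited tool.

```python
# Toy sketch of the placement objective the surveyed mapping tools optimize:
# assign neurons to cores so that spikes crossing the shared interconnect
# (a proxy for interconnect energy) are minimized. Greedy heuristic and all
# traffic numbers are illustrative assumptions, not any cited tool's method.

def interconnect_spikes(mapping, traffic):
    """Total spike rate between neuron pairs placed on different cores."""
    return sum(rate for (a, b), rate in traffic.items()
               if mapping[a] != mapping[b])

def greedy_map(neurons, traffic, n_cores, capacity):
    """Assign each neuron, in order, to the core that currently yields the
    least cross-core spike traffic, subject to a per-core capacity limit."""
    mapping, load = {}, [0] * n_cores
    for n in neurons:
        best, best_cost = None, None
        for core in range(n_cores):
            if load[core] >= capacity:
                continue                      # core is full, skip it
            trial = dict(mapping, **{n: core})
            cost = sum(rate for (a, b), rate in traffic.items()
                       if a in trial and b in trial and trial[a] != trial[b])
            if best_cost is None or cost < best_cost:
                best, best_cost = core, cost
        mapping[n] = best
        load[best] += 1
    return mapping

# Heavy traffic n0<->n1 and n2<->n3; light traffic n1<->n2.
traffic = {("n0", "n1"): 10, ("n1", "n2"): 1, ("n2", "n3"): 10}
m = greedy_map(["n0", "n1", "n2", "n3"], traffic, n_cores=2, capacity=2)
print(interconnect_spikes(m, traffic))  # → 1 (only the light edge crosses)
```

The cited tools replace this greedy pass with far stronger optimizers (particle swarm in PSOPART, dataflow analysis in DFSynthesizer, etc.) and richer cost models covering latency, temperature, endurance, and aging, but the underlying assignment problem has this shape.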