2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489619
Scalable NoC-based Neuromorphic Hardware Learning and Inference

Abstract: Bio-inspired neuromorphic hardware is a research direction that approaches the brain's computational power and energy efficiency. Spiking neural networks (SNNs) encode information as sparsely distributed spike trains and employ the spike-timing-dependent plasticity (STDP) mechanism for learning. Existing hardware implementations of SNNs are limited in scale or lack in-hardware learning capability. In this work, we propose a low-cost, scalable Network-on-Chip (NoC) based SNN hardware architecture with fully distributed …
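The abstract refers to pair-based STDP, where a synaptic weight is potentiated when a presynaptic spike precedes the postsynaptic spike and depressed otherwise. A minimal sketch of that update rule follows; the function name, parameter values (`a_plus`, `a_minus`, `tau`), and hard weight bounds are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP weight update (illustrative parameters).

    dt = t_post - t_pre in milliseconds: dt > 0 means the presynaptic
    spike arrived first, so the weight is potentiated; otherwise it is
    depressed. The change decays exponentially with |dt|.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)   # pre-before-post: potentiate
    else:
        dw = -a_minus * np.exp(dt / tau)  # post-before-pre: depress
    # Clip to keep the weight in a bounded range, as hardware typically does.
    return float(np.clip(w + dw, 0.0, w_max))
```

For example, a spike pair with `dt = 5.0` ms increases the weight, while `dt = -5.0` ms decreases it; the exponential window means widely separated spike pairs barely change the weight.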

Cited by 23 publications (2 citation statements)
References 42 publications
“…With the advent of reconfigurable processors and software-defined networking [1], there have been many architectures [2]–[6] that provide flexibility and energy efficiency in network systems. Recently, Network-on-Chip (NoC) systems have been established as the dominant communication method in multicore processors, mostly because of their increased scalability [7]–[9]. However, until now, there has been limited availability of networked-IC solutions to address the needs of programmable metamaterials and programmable metasurfaces (MSFs).…”
Section: Introduction
Confidence: 99%
“…To achieve high performance and energy efficiency, hardware acceleration of DNNs is intensively studied both in academia and industry [2]–[9]. DNN model compression techniques, including weight pruning [10]–[15] and weight quantization [16]–[18], are developed to facilitate hardware acceleration by reducing storage/computation in DNN inference with negligible impact on accuracy.…”
Section: Introduction
Confidence: 99%
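The excerpt above mentions weight pruning and weight quantization as compression techniques that reduce storage and computation. A minimal sketch of both, under assumptions of my own (magnitude-based pruning to a target sparsity, symmetric uniform quantization); neither function reflects the specific methods of the cited works:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value.
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def uniform_quantize(w, bits=8):
    """Symmetric uniform quantization to a signed integer grid, then
    dequantize, so the rounding error is at most half a step."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale
```

Pruning yields sparse weight matrices (fewer multiply-accumulates to perform), while quantization shrinks each remaining weight to a few bits of storage; the two are commonly combined in DNN accelerators.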