2015
DOI: 10.1109/jssc.2014.2386892

A Sparse Coding Neural Network ASIC With On-Chip Learning for Feature Extraction and Encoding

Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile devices to meet their low-power and real-time processing requirements. To realize high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to vision processing, which is an advantage over software-based solutions and general-purpose hardware. In this work, we present an ASIC that is designed to learn and extract features from images and videos. The ASIC contains 256 l…


Cited by 67 publications (32 citation statements)
References 23 publications

“…This same setup could be utilized in RNNs to make them more biologically realistic. This would let us better understand how the brain learns, and could lead to novel biomimetic technologies: prior work on biologically realistic machine learning algorithms has led to hardware devices that use on-chip learning (Knag et al., 2015; Zylberberg et al., 2011). Synaptically local updates do not have to be coordinated over all parts of the chip, enabling simpler and more efficient hardware implementations.…”
Section: Discussion (mentioning)
Confidence: 99%
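
"Synaptically local" means each weight update depends only on activity at the neurons a synapse connects, so no chip-wide coordination is required. As a rough illustration, here is a minimal sketch loosely following the SAILnet plasticity rules from Zylberberg et al. (2011); the learning rates, the target firing rate p, and the thresholded stand-in for spiking inference are illustrative assumptions, not values from either paper.

```python
import numpy as np

# Minimal sketch of synaptically local plasticity, loosely following the
# SAILnet rules (Zylberberg et al., 2011). Every update below uses only
# quantities available at the synapse itself (pre/post activity), which is
# why no chip-wide coordination is needed. Learning rates, the target
# firing rate p, and the inference stand-in are assumptions.

rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 16           # e.g., an 8x8 image patch, 16 neurons
p = 0.05                               # target mean firing rate (assumed)
alpha, beta, gamma = 1e-2, 1e-2, 1e-2  # learning rates (assumed)

Q = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # feedforward weights
W = np.zeros((n_neurons, n_neurons))                   # lateral inhibitory weights
theta = np.full(n_neurons, 0.5)                        # firing thresholds

x = rng.random(n_inputs)               # one input patch (placeholder data)
n = (Q @ x - theta > 0).astype(float)  # stand-in for spike counts from inference

# Each rule touches at most the pair of neurons its synapse connects:
Q += alpha * n[:, None] * (x[None, :] - n[:, None] * Q)  # Oja-like, local
W += beta * (np.outer(n, n) - p**2)                      # pairwise, local
np.fill_diagonal(W, 0.0)                                 # no self-inhibition
W = np.maximum(W, 0.0)                                   # inhibition stays non-negative
theta += gamma * (n - p)                                 # per-neuron homeostasis
```

Because every line of the update reads and writes only locally available state, a tiled, fully parallel hardware layout follows naturally, which is the point the citation statement makes.
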
“…Not only does using fewer outputs result in a worse encoding of the input, but due to the O(N²) scaling properties of these networks, this also favors the power and throughput figures in [27]. As such, subsequent results in this work will be compared with Knag et al., as their chip performs a comparable amount of work to ours [25]. Like the LCA, SAILnet uses a direct inhibitory weight between each pair of output neurons, yielding a scaling complexity of O(N²).…”
Section: Related Work (mentioning)
Confidence: 88%
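
To make the O(N²) point concrete: with a direct inhibitory weight between every pair of N output neurons, one lateral-inhibition step is an N x N matrix-vector product, i.e. N² multiply-accumulates, so halving the number of outputs quarters that work. The sketch below is a hedged illustration; the helper name and the random data are assumptions, not either chip's actual update.

```python
import numpy as np

# Why all-to-all lateral inhibition scales as O(N^2): one inference step
# multiplies an N x N weight matrix by the N current activities, i.e.
# N^2 multiply-accumulates (MACs). The simple dot product here is an
# illustrative assumption, not the exact dynamics used by either chip.

def inhibition_macs(n_neurons: int) -> int:
    """Multiply-accumulates for one lateral-inhibition step."""
    return n_neurons * n_neurons

N = 256
a = np.random.default_rng(1).random(N)        # current neuron activities
W = np.random.default_rng(2).random((N, N))   # pairwise inhibitory weights
np.fill_diagonal(W, 0.0)                      # no self-inhibition

inhibition = W @ a                            # the O(N^2) step: N^2 MACs
print(inhibition_macs(N))                     # 65536 MACs per step for N = 256
print(inhibition_macs(N // 2))                # 16384: halving N quarters the work
```

This is why comparing chips with different output counts, as the citing authors note, implicitly favors the smaller network on power and throughput.
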
“…ASICs using this architecture have been studied, with a substantial reduction in power compared to the approach presented by Shapero et al. [16]. Knag et al. [25] were able to use the SAILnet architecture to process images at only 48 pJ/input for their inference logic with a throughput of 0.55 MOps/s, or at 176 pJ/input with a throughput of 4.8 MOps/s, 120× as fast as Shapero et al. [16]. Their design was CMOS-based and used reduced-precision weight storage: 4 bits per excitatory or inhibitory weight.…”
Section: Related Work (mentioning)
Confidence: 99%
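
For intuition on the 4-bit weight storage mentioned in the excerpt: each excitatory or inhibitory weight is held as one of 16 levels. The uniform quantizer below is a speculative sketch of such a scheme and its worst-case rounding error; the chip's actual encoding is not given in this excerpt, so the scaling is an assumption.

```python
import numpy as np

# Rough sketch of 4-bit weight storage: each non-negative excitatory or
# inhibitory weight is kept as one of 16 levels. This uniform scheme is an
# illustrative assumption, not the encoding used on the chip.

def quantize_4bit(w: np.ndarray, w_max: float) -> np.ndarray:
    """Map weights in [0, w_max] onto 16 uniform levels (4-bit codes)."""
    return np.clip(np.round(w / w_max * 15), 0, 15).astype(np.uint8)

def dequantize_4bit(codes: np.ndarray, w_max: float) -> np.ndarray:
    """Recover approximate weights from their 4-bit codes."""
    return codes.astype(float) / 15 * w_max

w = np.array([0.0, 0.12, 0.5, 0.98])
codes = quantize_4bit(w, w_max=1.0)        # e.g. [0, 2, 8, 15]
print(codes, dequantize_4bit(codes, 1.0))  # rounding error at most w_max / 30
```

Storing 4 bits instead of a full-precision word shrinks on-chip weight memory and the MAC datapath, which is consistent with the power figures the citing authors report.
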
“…1(b). Spike-based online learning is an active research area, both in the development of new rules for high-accuracy learning in multi-layer networks (e.g., [9]-[12]) and in the demonstration of silicon implementations in applications such as unsupervised learning for image denoising and reconstruction [13], [14]. However, these approaches currently rely on multi-bit weights.…”
(mentioning)
Confidence: 99%