2019
DOI: 10.3389/fnins.2019.00004

REMODEL: Rethinking Deep CNN Models to Detect and Count on a NeuroSynaptic System

Abstract: In this work, we analyze the detection and counting of cars using the low-power IBM TrueNorth Neurosynaptic System. For our evaluation, we used a publicly available dataset of overhead imagery of cars with surrounding context present in each image. The trained neural network for image analysis was deployed on the NS16e system using IBM's EEDN training framework. Through multiple experiments, we identify the architectural bottlenecks present in the TrueNorth system that do not let us deploy large neural network…
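The abstract summarizes detection and counting of cars with a CNN deployed through EEDN on TrueNorth; that pipeline is not reproduced here. As a rough illustration of the general patch-classification approach to counting objects in overhead imagery, the sketch below slides a window over an image and counts patches a small CNN labels as "car". The model architecture, patch size, and threshold are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the paper's EEDN/TrueNorth pipeline): count cars in an
# overhead image by classifying fixed-size patches with a small CNN.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN that labels a 32x32 patch as car / no-car (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def count_cars(image, model, patch=32, stride=32, threshold=0.5):
    """Slide a window over the image and count patches classified as 'car'."""
    model.eval()
    count = 0
    with torch.no_grad():
        for y in range(0, image.shape[1] - patch + 1, stride):
            for x in range(0, image.shape[2] - patch + 1, stride):
                logits = model(image[:, y:y + patch, x:x + patch].unsqueeze(0))
                prob_car = torch.softmax(logits, dim=1)[0, 1]
                count += int(prob_car > threshold)
    return count

# Example with random data; a real run would use a trained model and real imagery.
image = torch.rand(3, 256, 256)
print(count_cars(image, PatchClassifier()))
```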

Cited by 9 publications (6 citation statements) · References 26 publications (52 reference statements)
“…Several applications demonstrated on neuromorphic hardware have employed some of the aforementioned mapping techniques. Tasks such as keyword spotting, medical image analysis and object detection have been demonstrated to run efficiently on existing platforms such as Intel's Loihi and IBM's TrueNorth [48][49][50].…”
Section: Box 1 | Spiking Neural Network
confidence: 99%
“…In DIGITS, two CNNs, AlexNet and GoogLeNet, can be used for image classification. It has been shown that GoogLeNet performs better than AlexNet for classification, detection, and counting [48,[58][59][60][61][62][63]. For training the model, we used GoogLeNet with a 22-layer deep CNN, comprising two convolutional layers, pooling layers (four MAX pools and one AVG pool), and nine "Inception" modules, in which each module has six convolution layers, one pooling layer, and 4 million parameters [64,65]; the deep CNN was implemented using open-source software on the CentOS Linux distribution 7.3.1611.…”
Section: Models
confidence: 99%
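The excerpt above describes training the 22-layer GoogLeNet (Inception v1) architecture in NVIDIA DIGITS. As a minimal point of reference, the sketch below instantiates the same architecture through torchvision and runs a dummy forward pass; the two-class head and 224×224 input are illustrative assumptions, not the cited paper's training setup.

```python
# Minimal sketch: instantiate GoogLeNet (the 22-layer Inception v1 architecture)
# and run a dummy forward pass. Assumes torchvision is installed; the 2-class
# head and 224x224 input are illustrative, not the cited configuration.
import torch
from torchvision import models

model = models.googlenet(num_classes=2, aux_logits=False, init_weights=True)
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)  # one RGB image, ImageNet-style size
    logits = model(dummy)                # shape: (1, 2)
print(logits.shape)
```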
“…Here we study both the number of cores required for mapping and the classification accuracy (IMAGENET) for the HFNets (Figure 11) and three other popular CNN architectures, namely VGGNet (VGG-16), MobileNet, and REMODEL [a modification of VGG-16 for mapping the final fully connected layers onto IBM TrueNorth (Shukla et al., 2019)]. We consider two core sizes: the minimum 128 × 256 and the maximum size 1,024 × 1,024.…”
Section: Number of Cores and Classification Accuracy
confidence: 99%
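The quoted study compares core counts for two crossbar sizes (128 × 256 and 1,024 × 1,024). A back-of-the-envelope sketch below estimates how many such cores a single fully connected layer would occupy by tiling its weight matrix over the axon × neuron crossbar; the actual mapping in the cited papers also handles convolutional layers, fan-in limits, and routing, so this is only illustrative.

```python
# Illustrative estimate (not the cited mapping algorithm): number of crossbar
# cores needed to tile one fully connected layer's weight matrix, where each
# core provides `axons` inputs and `neurons` outputs.
import math

def cores_for_fc(n_inputs: int, n_outputs: int, axons: int, neurons: int) -> int:
    """Tile an (n_inputs x n_outputs) weight matrix over axons x neurons cores."""
    return math.ceil(n_inputs / axons) * math.ceil(n_outputs / neurons)

# Example: a hypothetical 4096 -> 1000 final classifier layer (VGG-16-style).
for axons, neurons in [(128, 256), (1024, 1024)]:
    print(f"{axons}x{neurons} cores:", cores_for_fc(4096, 1000, axons, neurons))
```

With these assumed dimensions the smaller 128 × 256 core requires 32 × 4 = 128 cores for the layer, while the 1,024 × 1,024 core requires only 4 × 1 = 4, which illustrates why core size matters when mapping large fully connected layers.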