“…The peak performance of one CHIPMUNK chip is 32.2 Gop/s (at 1.24 V), and its peak energy efficiency (3.08 Gop/s/mW) is reached at 0.75 V. Table 1 compares architectural parameters and synthesis results between CHIPMUNK and the existing VLSI and FPGA-based implementations for which performance and energy numbers have been published. Our work reaches performance comparable with the DNPU proposed by Shin et al. [14]. Performance is obviously below that claimed by the Google TPU [10], but this is mostly due to the difference in size.…”
Section: Silicon Prototype and Comparison With State-of-the-art (supporting)
Recurrent neural networks (RNNs) are state-of-the-art in voice awareness/understanding and speech recognition. On-device computation of RNNs on low-power mobile and wearable devices would be key to applications such as zero-latency voice-based human-machine interfaces. Here we present CHIPMUNK, a small (<1 mm²) hardware accelerator for Long Short-Term Memory RNNs in UMC 65 nm technology, capable of operating at a measured peak efficiency of up to 3.08 Gop/s/mW at 1.24 mW peak power. To implement large RNN models without incurring huge memory-transfer overhead, multiple CHIPMUNK engines can cooperate to form a single systolic array. In this way, the CHIPMUNK architecture in a 75-tile configuration can achieve real-time phoneme extraction on a demanding RNN topology proposed in [1], consuming less than 13 mW of average power.
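As a rough cross-check of the figures quoted above, the throughput at the efficiency-optimal operating point and the per-tile power budget of the 75-tile configuration can be derived directly. The sketch below (plain Python, using only the numbers reported here; the derivations and the even per-tile split are our own assumptions) illustrates the arithmetic:

```python
# Back-of-the-envelope check of the CHIPMUNK figures quoted above.
# All input numbers come from the excerpt; the derivations are ours.

peak_perf_gops = 32.2        # Gop/s at 1.24 V (peak-performance point)
peak_eff_gops_per_mw = 3.08  # Gop/s/mW at 0.75 V (peak-efficiency point)
peak_power_mw = 1.24         # mW at the peak-efficiency point

# Throughput at the efficiency-optimal point: efficiency x power.
eff_point_perf = peak_eff_gops_per_mw * peak_power_mw   # ~3.8 Gop/s

# Average per-tile power of the 75-tile systolic configuration,
# assuming the <13 mW budget is spread evenly across tiles.
tiles = 75
per_tile_mw = 13.0 / tiles                              # ~0.17 mW/tile

print(f"throughput at peak efficiency: {eff_point_perf:.2f} Gop/s")
print(f"average power per tile:        {per_tile_mw:.3f} mW")
```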
“…Both implementations in [22] and [20] have a higher power efficiency than NullHop, but provide consistently lower performance (<350 GOp/s) while using more MAC units. They also require a larger area (16 mm²), but this is justified by their support for recurrent neural networks and variable bit precision.…”
Section: Memory Power Consumption Estimation (mentioning)
Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphics processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor ModelSim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply-accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm². As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.
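The central idea NullHop exploits, skipping multiply-accumulates for zero activations, can be sketched in a few lines. The snippet below is a minimal software analogue of zero-skipping, not NullHop's actual datapath or compressed-map format; names and shapes are ours:

```python
import numpy as np

def sparse_dot(activations: np.ndarray, weights: np.ndarray) -> float:
    """Dot product that skips zero activations.

    Only nonzero activations generate multiply-accumulate work,
    which is the source of NullHop's speedup and its >100%
    'efficiency' relative to the nominal MAC throughput.
    """
    nz = np.flatnonzero(activations)      # indices of nonzero activations
    return float(activations[nz] @ weights[nz])

# Example: a ReLU layer's output is often mostly zeros.
acts = np.maximum(np.random.randn(1024), 0.0)  # roughly half zeros after ReLU
w = np.random.randn(1024)
assert np.isclose(sparse_dot(acts, w), acts @ w)
print(f"nonzero fraction: {np.count_nonzero(acts) / acts.size:.2f}")
```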
“…The first type focuses on traditional ANNs: custom architectures [11,12,15,21,23,24,38,39,46,47,56,57,59,61,64,67,68] designed to accelerate mature ANN models. We usually call this type NN accelerators.…”
Section: NN Chips (mentioning)
“…, (2^(N−1) − 1)/2^P} where P represents the point position. This method is used by DNPU [61], Strip [38], TianJi-ANN [60], etc. • Fraction encoding:…”
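The point-position encoding quoted above maps a real value onto the N-bit grid of multiples of 2^(−P), i.e. values k/2^P for integers k in [−2^(N−1), 2^(N−1) − 1]. A minimal sketch (our own illustration; the function name and the chosen N and P are not from the cited papers):

```python
def fixed_point_quantize(x: float, n_bits: int, point_pos: int) -> float:
    """Round x to the nearest N-bit fixed-point value with point position P.

    Representable values are k / 2**P for integers k in
    [-2**(N-1), 2**(N-1) - 1], matching the encoding quoted above.
    """
    scale = 2 ** point_pos
    k = round(x * scale)
    k_min, k_max = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    k = max(k_min, min(k_max, k))        # saturate on overflow
    return k / scale

# Example: 8-bit values with the binary point at position 4
print(fixed_point_quantize(0.3, n_bits=8, point_pos=4))    # 0.3125 = 5/16
print(fixed_point_quantize(100.0, n_bits=8, point_pos=4))  # saturates at 7.9375
```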
Different from developing neural networks (NNs) for general-purpose processors, development for NN chips usually faces hardware-specific restrictions, such as limited precision of network signals and parameters, constrained computation scale, and limited types of non-linear functions. This paper proposes a general methodology to address these challenges. We decouple NN applications from the target hardware by introducing a compiler that can transform an existing trained, unrestricted NN into an equivalent network that meets the given hardware's constraints. We propose multiple techniques to make the transformation adaptable to different kinds of NN chips, and reliable under strict hardware constraints. We have built such a software tool that supports both spiking neural networks (SNNs) and traditional artificial neural networks (ANNs). We have demonstrated its effectiveness with a fabricated neuromorphic chip and a processing-in-memory (PIM) design. Tests show that the inference error caused by this solution is insignificant and the transformation time is much shorter than the retraining time. We have also carried out parameter-sensitivity evaluations to explore the trade-offs between network error and resource utilization for different transformation strategies, which could provide insights for co-design optimization of neuromorphic hardware and software.
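The workflow this abstract describes, transforming a trained, unrestricted network into a constrained equivalent and then checking that the inference error stays insignificant, can be illustrated end to end. The sketch below is our own drastic simplification (a single dense layer and a vectorized variant of the fixed-point quantizer sketched above); the actual tool supports far richer transformations:

```python
import numpy as np

def quantize_array(a, n_bits=8, point_pos=4):
    # Vectorized version of fixed_point_quantize above.
    scale = 2.0 ** point_pos
    k = np.clip(np.round(a * scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return k / scale

# "Trained, unrestricted" layer: float32 weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(64, 128)).astype(np.float32)
x = rng.normal(size=128).astype(np.float32)

# Transform to meet the hardware's precision constraint.
W_hw = quantize_array(W)

# Measure the inference error introduced by the transformation.
y_ref, y_hw = np.tanh(W @ x), np.tanh(W_hw @ x)
rel_err = np.linalg.norm(y_ref - y_hw) / np.linalg.norm(y_ref)
print(f"relative output error after quantization: {rel_err:.4f}")
```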