Abstract: This paper proposes an energy-efficient reconfigurable architecture for deep neural networks (EERA-DNN) with hybrid bit-width and a logarithmic multiplier. To speed up computation and achieve high energy efficiency, we first propose an efficient network compression method with a hybrid bit-width weight scheme, which reduces the memory storage of the LeNet, AlexNet, and EESEN networks by 7x-8x with negligible accuracy loss. We then propose an approximate unfolded logarithmic multiplier to process multiplication operations efficiently. Compared with the state-of-the-art architectures EIE and Thinker, this work achieves over 1.8x and 2.7x higher energy efficiency, respectively.
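The abstract does not detail the hybrid bit-width scheme, but the reported 7x-8x storage saving is consistent with compressing 32-bit floating-point weights down to roughly 4 bits on average. A minimal sketch of the underlying idea, using generic uniform symmetric quantization as an illustrative stand-in (the paper's actual scheme mixes bit-widths across weight groups; function names here are my own):

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of a weight tensor to `bits` bits.

    Weights are mapped to integers in [-(2^(bits-1)), 2^(bits-1) - 1]
    and then rescaled, so each weight needs only `bits` bits of storage
    plus one shared floating-point scale per tensor.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

# Example: 4-bit weights give a 32/4 = 8x storage reduction,
# in line with the 7x-8x figure reported in the abstract.
w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
w_q = quantize_symmetric(w, bits=4)
```

The maximum quantization error of this scheme is half a quantization step (0.5 * scale), which is why aggressive bit-width reduction can keep accuracy loss small when the bit-width is chosen per layer according to each layer's sensitivity.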
This paper proposes an Energy-Efficient Reconfigurable Architecture (E-ERA) for Recurrent Neural Networks (RNNs). In E-ERA, reconfigurable computing arrays with approximate multipliers and a dynamically adaptive accuracy-control mechanism are implemented to achieve high energy efficiency. The E-ERA prototype is implemented in a TSMC 45 nm process. Experimental results show that, compared with traditional designs, the power consumption of E-ERA is reduced by 28.6%∼52.3%, with only a 5.3%∼9.2% loss in accuracy. Compared with state-of-the-art architectures, E-ERA achieves up to 1.78x higher power efficiency and reaches 304 GOPS/W when processing RNNs for speech recognition.
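Neither abstract spells out how an approximate multiplier trades accuracy for power. The classic logarithmic approach, on which unfolded designs build, is Mitchell's algorithm: replace multiplication with an addition of piecewise-linear binary-logarithm estimates. A minimal software sketch, assuming positive integer operands (function names are illustrative, not from either paper):

```python
def approx_log2(x: int) -> float:
    """Mitchell's piecewise-linear log2 estimate for a positive integer:
    the integer part is the MSB position k, and the remaining low bits,
    divided by 2^k, approximate the fractional part."""
    k = x.bit_length() - 1
    return k + (x - (1 << k)) / (1 << k)

def approx_multiply(a: int, b: int) -> float:
    """Approximate a*b by adding log estimates, then applying the
    matching approximate antilog: 2^k * (1 + frac)."""
    s = approx_log2(a) + approx_log2(b)
    k = int(s)
    return (1 << k) * (1 + s - k)

# Exact when both operands are powers of two; otherwise the result
# underestimates the true product.
product = approx_multiply(12, 13)  # true product is 156
```

Mitchell's basic scheme always underestimates, with a worst-case relative error of about 11.1%; hardware refinements such as unfolded or correction-term variants tighten this bound, which is consistent with the modest (5.3%∼9.2%) accuracy loss reported above.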