Recent breakthroughs in deep neural networks have led to the proliferation of their use in image and speech applications. Conventional deep neural networks (DNNs) are fully-connected multi-layer networks with hundreds or thousands of neurons in each layer. Such a network requires a very large weight memory to store the connectivity between neurons. In this paper, we propose a hardware-centric methodology to design low power neural networks with a significantly smaller memory footprint and computation resource requirements. We achieve this by judiciously dropping connections in large blocks of weights. The corresponding technique, termed coarse-grain sparsification (CGS), introduces hardware-aware sparsity during DNN training, which leads to efficient weight memory compression and a significant computation reduction during classification without losing accuracy. We apply the proposed approach to DNN design for keyword detection and speech recognition. When the two DNNs are trained with 75% of the weights dropped and classified with 5-6 bit weight precision, the weight memory requirement is reduced by 95% compared to their fully-connected counterparts with double precision, while maintaining similar performance in keyword detection accuracy, word error rate, and sentence error rate. To validate this technique in real hardware, a time-multiplexed architecture using a shared multiply and accumulate (MAC) engine was implemented in 65nm and 40nm low power (LP) CMOS. In 40nm at 0.6V, the keyword detection network consumes 7µW and the speech recognition network consumes 103µW, making this technique highly suitable for mobile and wearable devices.
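The core idea of coarse-grain sparsification — dropping entire contiguous blocks of the weight matrix rather than individual weights, so that hardware only stores and indexes surviving blocks — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the block size, drop rate, and matrix dimensions are assumptions for demonstration.

```python
import numpy as np

def cgs_mask(rows, cols, block=16, drop_rate=0.75, seed=None):
    """Return a {0,1} mask that zeroes out `drop_rate` of the weight blocks.

    Each decision is made per block, not per weight, which is what makes
    the resulting sparsity pattern cheap to encode in hardware.
    """
    rng = np.random.default_rng(seed)
    br, bc = rows // block, cols // block
    keep = rng.random((br, bc)) >= drop_rate  # True = block survives
    # Expand each block-level decision to its block x block region.
    return np.kron(keep.astype(np.float32), np.ones((block, block), np.float32))

# Hypothetical 256x256 weight layer with 75% of its 16x16 blocks dropped.
W = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
mask = cgs_mask(*W.shape, block=16, drop_rate=0.75, seed=0)
W_sparse = W * mask  # in training, the mask is applied after each update

print(f"fraction of weights dropped: {1.0 - mask.mean():.2f}")
```

Because the mask is fixed at the block level, only the indices of the kept blocks and their dense contents need to be stored, which is the source of the memory compression the abstract reports.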
A predictive method based on artificial neural networks (ANNs) has been developed for the thermophysical properties of binary liquid mixtures at (303.15, 313.15, and 323.15) K. In method 1, a committee ANN was trained using five physical properties combined with absolute temperature as its input to predict the thermophysical properties of liquid mixtures. Using these data, predicted values were determined for intermediate mole fractions of different systems without conducting experiments. In method 2, a committee ANN was trained using mole fraction and molecular weight as its input to predict the thermophysical properties of liquid mixtures. The five physical properties of five binary mixtures, along with their molecular weights, were taken for this study. An ANN with the back-propagation algorithm, developed in MATLAB, is proposed for the multi-pass turning operation. Compared to other prediction techniques, the proposed ANN approach is highly accurate, with an error below 1%.
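A committee ANN of the kind described above averages the predictions of several independently initialised networks, each trained with back-propagation. The sketch below illustrates the idea in NumPy rather than MATLAB; the synthetic data, network size, and learning rate are all assumptions standing in for the paper's mole-fraction and property measurements.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """Train one single-hidden-layer network with plain back-propagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)             # forward pass
        out = h @ W2 + b2
        err = out - y                        # gradient of 0.5 * MSE w.r.t. out
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)       # back-propagate through tanh
        gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

# Toy stand-in data: a smooth "property curve" over mole fraction in [0, 1].
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = 0.5 + 0.3 * np.sin(2 * np.pi * X)

# Committee: five networks with different initialisations, predictions averaged.
committee = [train_mlp(X, y, seed=s) for s in range(5)]
predict = lambda Z: np.mean([m(Z) for m in committee], axis=0)

# Predict at intermediate mole fractions not measured experimentally.
X_new = np.array([[0.25], [0.75]])
print(predict(X_new))
```

Averaging over the committee reduces the variance introduced by any single network's random initialisation, which is the usual motivation for committee ANNs in property-prediction work.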