On-chip edge intelligence has necessitated the exploration of algorithmic techniques to reduce the compute requirements of current machine learning frameworks. This work aims to bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks, both of which are driven by the same motivation yet whose synergies have not been fully explored. We show that training Spiking Neural Networks in the extreme quantization regime results in near-full-precision accuracies on large-scale datasets like CIFAR-100 and ImageNet. An important implication of this work is that Binary Spiking Neural Networks can be enabled by "In-Memory" hardware accelerators catered to Binary Neural Networks without suffering any accuracy degradation due to binarization. We utilize standard training techniques for non-spiking networks to generate our spiking networks via a conversion process, and we perform an extensive empirical analysis exploring simple design-time and run-time optimization techniques that reduce the inference latency of spiking networks (both binary and full-precision models) by an order of magnitude over prior work. Our implementation source code and trained models are available at https://github.com/NeuroCompLab-psu/SNN-Conversion.
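The abstract does not detail the conversion procedure itself. For illustration only, a minimal sketch of a commonly used rate-coded ANN-to-SNN conversion, using integrate-and-fire neurons and per-layer threshold balancing, is given below; all function names, the Poisson input coding, and the reset-by-subtraction choice are assumptions for this sketch, not taken from the released code (in the binary case the weights would additionally be constrained to {-1, +1}).

```python
import numpy as np

def if_neuron_layer(weights, bias, spikes_in, v_mem, v_th):
    """One timestep of an integrate-and-fire (IF) layer.

    The membrane potential accumulates the weighted input spikes; a spike
    is emitted whenever the potential crosses the threshold v_th, after
    which the potential is reduced by v_th (reset-by-subtraction).
    """
    v_mem += weights @ spikes_in + bias
    spikes_out = (v_mem >= v_th).astype(np.float32)
    v_mem -= spikes_out * v_th
    return spikes_out, v_mem

def convert_and_run(layers, x, timesteps=100):
    """Run a converted (rate-coded) SNN for a fixed number of timesteps.

    `layers` is a list of (weights, bias, v_th) tuples taken from a trained
    ANN, with each threshold v_th set from the maximum activation observed
    on training data (threshold balancing). The input x in [0, 1] is
    Poisson rate-coded into spike trains.
    """
    v_mems = [np.zeros(b.shape) for _, b, _ in layers]
    counts = np.zeros(layers[-1][1].shape)
    for _ in range(timesteps):
        s = (np.random.rand(*x.shape) < x).astype(np.float32)  # rate coding
        for i, (w, b, v_th) in enumerate(layers):
            s, v_mems[i] = if_neuron_layer(w, b, s, v_mems[i], v_th)
        counts += s
    # Output firing rates approximate the ANN's output activations.
    return counts / timesteps
```

In such a scheme the inference latency is governed by the number of timesteps needed for the firing rates to converge, which is why the design-time and run-time optimizations mentioned above target latency directly.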
Probabilistic machine learning enabled by the Bayesian formulation has recently gained significant attention in the domain of automated reasoning and decision-making. While impressive strides have been made recently to scale up the performance of deep Bayesian neural networks, these have been primarily standalone software efforts without regard to the underlying hardware implementation. In this paper, we propose an "All-Spin" Bayesian Neural Network in which the underlying spintronic hardware provides a better match to the Bayesian computing model. To the best of our knowledge, this is the first exploration of a Bayesian neural hardware accelerator enabled by emerging post-CMOS technologies. We develop an experimentally calibrated device-circuit-algorithm co-simulation framework and demonstrate a 24× reduction in energy consumption relative to an iso-network CMOS baseline implementation.
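The abstract does not spell out the inference scheme being accelerated. As context only, a minimal software sketch of the Monte Carlo weight-sampling inference that Bayesian neural network accelerators typically implement is shown below; the Gaussian posterior parameterization, layer structure, and function names are assumptions for this sketch, not details from the paper.

```python
import numpy as np

def bayesian_layer_sample(mu, rho, x):
    """Sample one weight realization from a Gaussian posterior and apply it.

    mu, rho parameterize the per-weight posterior N(mu, softplus(rho)^2);
    each forward pass draws a fresh weight sample -- the stochastic
    operation such an accelerator would map onto its hardware primitives.
    """
    sigma = np.log1p(np.exp(rho))                 # softplus keeps sigma > 0
    w = mu + sigma * np.random.randn(*mu.shape)   # reparameterized sample
    return np.maximum(x @ w, 0.0)                 # ReLU activation

def predict(layers, x, n_samples=32):
    """Monte Carlo prediction: average over repeated stochastic forward passes."""
    outs = []
    for _ in range(n_samples):
        h = x
        for mu, rho in layers:
            h = bayesian_layer_sample(mu, rho, h)
        outs.append(h)
    # Mean gives the prediction; spread across samples gives the uncertainty.
    return np.mean(outs, axis=0), np.std(outs, axis=0)
```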
Feedforward control has been widely used to improve the tracking performance of precision motion systems. This paper develops a new data-driven feedforward tuning approach based on rational basis functions. The aim is to obtain the global optimum with optimal estimation accuracy. First, an instrumental variable is employed to ensure unbiased estimation of the global optimum. Then, the optimal instrumental variable, which leads to the highest estimation accuracy, is derived, and a new refined instrumental variable method is exploited to estimate it. Moreover, the estimation accuracy of the optimal parameter is further improved through the proposed parameter-updating law. Simulations are conducted to test the parameter estimation accuracy of the proposed approach, demonstrating that the global optimum is estimated without bias and with optimal accuracy in terms of variance. Experiments are performed, and the results validate the excellent performance of the proposed approach across varying tasks.
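The abstract references instrumental-variable estimation without giving the estimator. As background only, a minimal NumPy sketch of the generic IV estimator, theta = (Z^T Phi)^{-1} Z^T y, is shown below; the matrix names and the least-squares comparison are illustrative assumptions, not the paper's refined instrumental variable method.

```python
import numpy as np

def iv_estimate(Phi, Z, y):
    """Generic instrumental-variable estimate: solve (Z^T Phi) theta = Z^T y.

    Phi : regressor matrix built from the (noisy) measured signals,
    Z   : instrument matrix, correlated with the noise-free regressors
          but uncorrelated with the measurement noise,
    y   : observed error signal to be minimized by the feedforward filter.
    Choosing Z this way removes the bias that plain least squares suffers
    from when the regressors are corrupted by noise.
    """
    return np.linalg.solve(Z.T @ Phi, Z.T @ y)

def ls_estimate(Phi, y):
    """Ordinary least squares, for comparison: biased when Phi is noisy."""
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
```

The quality of the estimate then hinges on the choice of instruments, which is what the derivation of the optimal instrumental variable and the refined estimation method in the paper address.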