<p>Neural Networks (NNs) provide effective solutions in numerous application domains, including autonomous driving and medical applications. Nevertheless, NN predictions can be incorrect if the input sample lies outside the training distribution or is contaminated by noise. Consequently, quantifying the uncertainty of an NN prediction allows the system to make more informed decisions by avoiding blind predictions. Uncertainty quantification is therefore crucial for a variety of applications, particularly safety-critical ones. Bayesian NNs (BayNNs) using Dropout-based approximation provide a systematic approach for estimating the uncertainty of predictions. Despite this merit, BayNNs are not suitable for implementation on embedded devices and cannot meet the high-performance demands of certain applications. Computation-in-memory (CiM) architectures with emerging non-volatile memories (NVMs) are strong candidates for high-performance, low-power acceleration of BayNNs in hardware. Among NVMs, Magnetic Tunnel Junctions (MTJs) offer many benefits, but they also suffer from various non-idealities and limited bit-level resolution. As a result, binarizing BayNNs is an attractive option: it allows a BayNN to be implemented directly in a CiM architecture, achieving the benefits of both at the same time. Conventional in-memory hardware implementations target conventional NNs, which can only make point predictions and account for neither device nor input uncertainty, reducing both reliability and performance. In this paper, we propose for the first time Binary Bayesian NNs (BinBayNN) together with an end-to-end approach (from the algorithmic level down to the device level) for their implementation. Our approach exploits the inherent stochastic properties of MTJs as a feature to implement Dropout-based Bayesian Neural Networks. We provide an extensive evaluation of our approach, from the device level up to the algorithmic level, on various benchmark datasets.</p>
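The Dropout-based Bayesian approximation the abstract refers to is commonly realized as Monte Carlo Dropout: keep dropout active at inference time and run several stochastic forward passes, treating the spread of the outputs as predictive uncertainty. The following is a minimal NumPy sketch of that idea only; the two-layer network, its random weights, and the dropout rate are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights for a 2-layer network (illustrative only).
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays active at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

def mc_dropout_predict(x, T=100):
    """T stochastic passes: mean = prediction, std = uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.standard_normal(4)
mean_prob, std_prob = mc_dropout_predict(x)
```

A large `std_prob` for the winning class signals that the prediction should not be trusted blindly, which is the behavior the abstract motivates for safety-critical settings.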
The Convolutional Neural Network (CNN) is one of the most important classes of Deep Neural Networks (DNNs), helping to solve many tasks in image recognition and computer vision. Classical implementations using conventional CMOS technologies and digital design techniques are still considered very energy-consuming. Floating-point CNNs rely primarily on the MAC (Multiply-and-ACcumulate) operation. Recently, cost-effective bit-wise CNNs based on XNOR and bit-counting operations have been considered as a possible hardware implementation candidate. However, the von Neumann bottleneck caused by intensive data fetching between memory and the computing core limits their scalability in hardware. XNOR-bitcount operations can be easily implemented using In-Memory Computing (IMC) paradigms executed on a memristive crossbar array. Among emerging memristive devices, Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) offers a higher ON resistance, which reduces the read current, since the entire crossbar array is read in parallel. This could contribute to a further reduction in energy consumption, paving the way for much larger crossbar designs. This study presents a crossbar architecture based on SOT-MRAM with very low energy consumption; we study the impact of process variability on the synaptic weights and perform Monte Carlo simulations of the overall crossbar array to evaluate the error rate. Simulation results show that this implementation has lower energy consumption than other memristive solutions, at 65.89 fJ per read operation. The design is also quite robust to process variations, with very low reading inaccuracies for variations of up to 10%.
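The XNOR-bitcount substitution for MAC mentioned above rests on a standard identity: for two ±1 vectors encoded as bitmasks (+1 as bit 1, -1 as bit 0), the dot product equals `2 * popcount(XNOR(a, b)) - n`. A small sketch of that identity in plain Python (the helper names are my own, not from the paper):

```python
def pack(values):
    """Pack a list of +/-1 values into an integer bitmask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, v in enumerate(values):
        if v == 1:
            mask |= 1 << i
    return mask

def binary_dot_xnor(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors of length n, given as bitmasks.

    XNOR marks positions where the signs agree; each agreement contributes
    +1 and each disagreement -1, hence 2 * matches - n.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where bits match
    matches = bin(xnor).count("1")              # bit-count (popcount)
    return 2 * matches - n

# Example: [+1, +1, -1, +1] . [+1, -1, -1, +1] = 1 - 1 + 1 + 1 = 2
a = pack([1, 1, -1, 1])
b = pack([1, -1, -1, 1])
result = binary_dot_xnor(a, b, 4)  # -> 2
```

In the crossbar described by the abstract, the XNOR is realized by the memristive cells and the bit-count by sensing the accumulated column current, rather than by CPU instructions as here.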