Volatile Memory (VM) synapses based on conventional silicon transistors have been proposed as an alternative to Non-Volatile Memory (NVM) synapses in crossbar-array-based neuromorphic/in-memory-computing systems. Here, through SPICE simulations, we design an analog-digital-hybrid Volatile Memory Synapse Cell (VMSC) for such a crossbar array of VM synapses. In our VMSC, the transistor synapse stores a nearly analog weight value, while the other transistors, which carry out the weight update for the transistor synapse, are designed following the principle of static CMOS (digital) logic, making our design energy-efficient. Through a system-level study, we report classification accuracy, speed, and energy consumption for on-chip learning on the VMSC-based crossbar designed here, using popular machine-learning datasets. We show that despite the low capacitance of our MOSFET synapses (and hence their low area footprint), the weights are retained long enough for our VMSC-based crossbar to achieve accuracy comparable to that of an NVM-synapse-based crossbar.
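To illustrate the retention behavior that the crossbar's accuracy hinges on, the sketch below models the volatile weight as the voltage on a leaky RC node. This is not the authors' SPICE netlist: the capacitance, leakage resistance, and the 5% retention criterion are all assumed values chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' SPICE model): the volatile weight is
# the voltage on a small MOSFET gate capacitance C that leaks through an
# effective resistance R_leak. All numbers below are assumptions chosen only
# to show how retention time scales with the RC product.
C = 10e-15         # assumed synapse capacitance: 10 fF (low area footprint)
R_leak = 1e12      # assumed effective leakage resistance: 1 T-ohm
tau = R_leak * C   # retention time constant of the stored weight (10 ms here)

w0 = 1.0                                  # normalized programmed weight
t = np.linspace(0.0, tau, 200)            # observe over one time constant
w = w0 * np.exp(-t / tau)                 # first-order RC decay of the weight

# Assumed retention criterion: the weight counts as retained while it stays
# within 5% of its programmed value.
retention = tau * np.log(1 / 0.95)
print(f"tau = {tau*1e3:.1f} ms; 5% retention window = {retention*1e3:.2f} ms; "
      f"weight after one tau = {w[-1]:.2f}")
```

Under these assumed numbers, even a femtofarad-scale capacitance yields a retention window far longer than a typical weight-update interval, which is the qualitative point the abstract makes about accuracy parity with NVM synapses.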
Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both industry and academia. However, most co-design frameworks either explore a limited search space or employ suboptimal exploration techniques for making simultaneous design decisions about the ML model and the accelerator. Furthermore, training the ML model and simulating the accelerator's performance are computationally expensive. To address these limitations, this work proposes a novel neural architecture and hardware accelerator co-design framework, called CODEBench. It comprises two new benchmarking sub-frameworks, CNNBench and AccelBench, which explore expanded design spaces of convolutional neural networks (CNNs) and CNN accelerators. CNNBench leverages an advanced search technique, BOSHNAS, which efficiently trains a neural heteroscedastic surrogate model and exploits second-order gradients to converge to an optimal CNN architecture. AccelBench performs cycle-accurate simulations of diverse accelerator architectures in a vast design space. With the proposed co-design method, called BOSHCODE, our best CNN-accelerator pair achieves 1.4% higher accuracy on the CIFAR-10 dataset than the state-of-the-art pair while enabling 59.1% lower latency and 60.8% lower energy consumption. On the ImageNet dataset, it achieves 3.7% higher Top-1 accuracy at 43.8% lower latency and 11.2% lower energy consumption. CODEBench outperforms the state-of-the-art framework, i.e., Auto-NBA, by achieving 1.5% higher accuracy and 34.7× higher throughput, while enabling 11.0× lower energy-delay product (EDP) and 4.0× lower chip area on CIFAR-10.
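As a rough illustration of how a surrogate-guided co-design search amortizes expensive evaluations, the following sketch runs a toy loop over (CNN, accelerator) pairs. It is not BOSHNAS/BOSHCODE: the real framework uses a neural heteroscedastic surrogate trained with second-order gradients and cycle-accurate simulation, whereas every helper name and the distance-based surrogate below are hypothetical placeholders showing only the overall search structure.

```python
import math
import random

def sample_pair():
    """Draw a random (CNN config, accelerator config) pair from a tiny space."""
    cnn = (random.choice([8, 14, 20]), random.choice([16, 32, 64]))   # depth, width
    acc = (random.choice([64, 256, 1024]), random.choice([32, 128]))  # PEs, buffer KB
    return cnn + acc

def evaluate_pair(x):
    """Stand-in for CNN training plus accelerator simulation: returns a scalar
    reward combining accuracy, latency, and energy (assumed toy function)."""
    depth, width, pes, buf = x
    return depth * width / 2000 + math.log2(pes) / 20 - abs(buf - 64) / 1000

class Surrogate:
    """Toy surrogate: nearest-neighbor mean plus a distance-based uncertainty
    bonus, standing in for the heteroscedastic neural model."""
    def __init__(self):
        self.obs = []
    def fit(self, x, y):
        self.obs.append((x, y))
    def acquire(self, candidates, beta=0.1):
        if not self.obs:
            return random.choice(candidates)
        def ucb(c):
            # Nearest observed point: predicted value plus exploration bonus.
            d, y = min((sum((a - b) ** 2 for a, b in zip(c, x)), y)
                       for x, y in self.obs)
            return y + beta * math.sqrt(d)  # optimism grows with distance
        return max(candidates, key=ucb)

surrogate, best = Surrogate(), (None, float("-inf"))
for step in range(20):                     # assumed search budget
    candidates = [sample_pair() for _ in range(16)]
    x = surrogate.acquire(candidates)      # cheap surrogate-guided choice
    y = evaluate_pair(x)                   # the expensive step being amortized
    surrogate.fit(x, y)
    if y > best[1]:
        best = (x, y)
print("best (depth, width, PEs, bufKB):", best[0], "reward:", round(best[1], 3))
```

The design point this sketch captures is that the costly evaluation (model training plus accelerator simulation) is called once per iteration, while the surrogate screens many candidates per iteration at negligible cost.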