Modern computers are still largely based on the von Neumann architecture and are designed as general-purpose machines, which makes them useful and convenient to use but highly inefficient for data-intensive tasks. The bus connecting memory and processor becomes a bottleneck for data transfer, referred to as the von Neumann bottleneck [3]. To improve the performance of computing systems in the so-called "big data" era, we must fundamentally change the way we compute today: instead of being compute-centric, we should move to a data-centric paradigm.

Neuroscientists and psychologists around the world have been studying the functional architecture of the human brain for centuries, and this work has inspired data-centric computing methods such as artificial neural networks (ANNs) and machine learning (ML). The human brain can be characterized by its massively parallel, reconfigurable connections (synapses, or memory) linking billions of neurons (the main processing units) [4]. Synapses play a very important role in the learning and adaptability of the human brain. The weight of a synapse represents the connection strength between the two neurons it links. During learning, the synaptic weight changes in an analog fashion according to the learning rules [5]. ML and ANNs use a high-level abstraction of human cognition and are referred to as brain-inspired computing. To further leverage the potential advantages and capabilities of the human brain, we may need to mimic its functionality more faithfully in hardware. Emerging devices that can be used for such neuromorphic computing, with different levels of brain inspiration, are the topic of discussion in this paper.

In recent years, neuromorphic computing has emerged as a promising technology for the post-Moore's-law era. Neuromorphic computing systems are highly connected and parallel, consume relatively little power, and process data in memory. To implement a neuromorphic system in hardware, it is important to realize (1) artificial neurons that mimic biological neurons and (2) artificial synapses that emulate biological synapses, both of which must be power-efficient, scalable, and capable of implementing relevant learning rules to facilitate large-scale neuromorphic functions. To this end, numerous efforts have been made over the last few years to realize artificial synapses using post-CMOS devices, including resistive random-access memory (ReRAM) based on drift [6] and diffusive [7] memristors. A neuromorphic computing system may be able to learn and perform a task on its own by interacting with its surroundings. Combining such a chip with complementary metal-oxide-semiconductor (CMOS)-based processors can potentially solve a variety of problems faced by today's artificial intelligence (AI) systems. Although various architectures purely based on CMOS are designed to maximize the computing efficiency of AI-based applications, the most fundamental operations, including matrix multiplication and convolution, heavily rely on the CMOS-based m...
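To make the connection between synaptic weights and the matrix-multiplication workload concrete, the following minimal Python sketch (illustrative only, not tied to any specific device in this paper) treats a synaptic weight matrix as a layer whose forward pass is a single matrix-vector product, and updates it with a simple Hebbian rule as one example of an analog learning rule:

```python
import numpy as np

# Toy illustration: a synaptic weight matrix W maps pre-synaptic activities x
# to post-synaptic activities y.  The core operation is a matrix-vector
# product, which is exactly what memristor crossbars accelerate in analog.
rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))  # synaptic weights (arbitrary init)

def forward(x):
    """Post-synaptic activity: one matrix-vector multiply per layer."""
    return np.tanh(W @ x)

def hebbian_update(x, y, lr=0.01):
    """Simple Hebbian rule: strengthen a synapse when its pre- and
    post-synaptic neurons are co-active (one of many possible learning rules)."""
    global W
    W += lr * np.outer(y, x)

x = rng.random(n_pre)   # stand-in for sensory input
y = forward(x)
hebbian_update(x, y)
```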
Neuromorphic computing based on spikes offers great potential for highly efficient computing paradigms. Recently, several hardware implementations of spiking neural networks based on traditional complementary metal-oxide-semiconductor technology or memristors have been developed. However, an interface with the environment (called an afferent nerve in biology), which converts analog signals from sensors into spikes for spiking neural networks, has yet to be demonstrated. Here we propose and experimentally demonstrate, for the first time, an artificial spiking afferent nerve based on highly reliable NbOx Mott memristors. The spiking frequency of the afferent nerve is proportional to the stimulus intensity until noxiously high stimuli are encountered, beyond which the spiking frequency decreases past an inflection point. Using this afferent nerve, we further build a power-free spiking mechanoreceptor system with a passive piezoelectric device as the tactile sensor. The experimental results indicate that our afferent nerve is promising for constructing self-aware neurorobotics in the future.
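The rate-coding behavior described above can be sketched in software as a toy encoder (a qualitative stand-in, not the authors' device model): spiking frequency rises with normalized stimulus intensity up to a hypothetical "noxious" threshold and falls beyond it.

```python
import numpy as np

# Illustrative sketch only: rate-encode an analog sensor amplitude into spikes,
# with an inflection point at a hypothetical noxious threshold, mimicking the
# behavior reported for the NbOx Mott-memristor afferent nerve.
def spike_rate(intensity, f_max=200.0, noxious=0.7):
    """Spiking frequency (Hz) for a normalized intensity in [0, 1]."""
    if intensity <= noxious:
        return f_max * intensity / noxious                        # rising branch
    return f_max * max(0.0, 1.0 - (intensity - noxious) / (1.0 - noxious))  # falling branch

def encode_spike_train(intensity, duration=1.0, dt=1e-3, seed=0):
    """Poisson spike train at the rate given by spike_rate()."""
    rng = np.random.default_rng(seed)
    rate = spike_rate(intensity)
    t = np.arange(0.0, duration, dt)
    return t[rng.random(t.size) < rate * dt]                      # spike times (s)

print(len(encode_spike_train(0.5)), len(encode_spike_train(0.95)))
```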
Reservoir computing (RC) is a framework that can extract features from a temporal input into a higher-dimensional feature space. The reservoir is followed by a readout layer that analyzes the extracted features to accomplish tasks such as inference and classification. RC systems have an inherent advantage: training is performed only at the readout layer, so they can process complicated temporal data with a low training cost. Herein, a physical reservoir computing system using a diffusive-memristor-based reservoir and a drift-memristor-based readout layer is experimentally implemented. The rich nonlinear dynamic behavior exhibited by diffusive memristors due to Ag migration and the robust in situ training of drift memristor arrays make the combined system well suited for temporal pattern classification. It is then demonstrated experimentally that the RC system can successfully identify handwritten digits from the Modified National Institute of Standards and Technology (MNIST) dataset, achieving an accuracy of 83%.
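A minimal software sketch of the RC principle (a generic echo-state-style stand-in, not the memristor hardware described above) shows why training cost stays low: the nonlinear reservoir is fixed and random, and only the linear readout is fitted, here by ridge regression.

```python
import numpy as np

# Generic RC sketch: a fixed random nonlinear reservoir expands a temporal
# input into a higher-dimensional state; only the linear readout is trained.
rng = np.random.default_rng(1)
n_in, n_res = 1, 100

W_in  = rng.uniform(-0.5, 0.5, size=(n_res, n_in))        # fixed input weights
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius < 1

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W_res @ x)                  # fixed nonlinear dynamics
        states.append(x.copy())
    return np.array(states)                                # shape (T, n_res)

# Toy temporal task: predict a phase-shifted sine from its input.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
y = np.sin(t + 0.5)[:, None]
X = run_reservoir(u)

# Train only the readout by ridge regression.
lam = 1e-6
W_out = y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(n_res))
pred = X @ W_out.T
print("readout MSE:", float(np.mean((pred - y) ** 2)))
```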