Fabien Alibart received a Ph.D. in materials science from the University of Picardie Jules Verne, France, in 2008. He joined IEMN-CNRS in 2012 as a permanent researcher, where he worked on concepts for neuromorphic/bioinspired computing with emerging memory technologies. He is now with LN2-3IT CNRS as a researcher participating in the joint laboratory program between France and Quebec (UMI-CNRS), where he is developing neuromorphic hardware for a variety of applications, from edge computing to brain-machine interfaces.
A key challenge in scaling quantum computers is the calibration and control of multiple qubits. In solid-state quantum dots (QDs), the gate voltages required to stabilize quantized charges are unique to each individual qubit, resulting in a high-dimensional control parameter space that must be tuned automatically. Machine learning techniques are capable of processing high-dimensional data—provided that an appropriate training set is available—and have been successfully used for autotuning in the past. In this paper, we develop extremely small feed-forward neural networks that can be used to detect charge-state transitions in QD stability diagrams. We demonstrate that these neural networks can be trained on synthetic data produced by computer simulations, and robustly transferred to the task of tuning an experimental device into a desired charge state. The neural networks required for this task are sufficiently small to enable implementation in existing memristor crossbar arrays in the near future. This opens up the possibility of miniaturizing powerful control elements on low-power hardware, a significant step towards on-chip autotuning in future QD computers.
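To give a rough sense of the scale involved, the sketch below trains a toy feed-forward classifier to flag a transition-like feature in a small synthetic image patch. The patch size (8×8), layer width, and synthetic generator are illustrative assumptions, not the architecture or simulation pipeline from the paper; the point is only that a network of a few hundred weights, trained purely on synthetic data, can already perform such a detection task.

```python
# Minimal sketch (assumptions: patch size, layer widths, and the synthetic
# generator are illustrative, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

def synthetic_patch(has_transition):
    """Toy stand-in for a simulated stability-diagram patch (8x8 pixels).
    A 'transition' is drawn as a noisy bright diagonal ridge."""
    patch = rng.normal(0.0, 0.1, (8, 8))
    if has_transition:
        idx = np.arange(8)
        patch[idx, idx] += 1.0  # hypothetical charge-transition ridge
    return patch.ravel()

# Tiny feed-forward network: 64 inputs -> 8 hidden -> 1 output (~0.5k weights),
# small enough in spirit to map onto a memristor crossbar.
W1 = rng.normal(0, 0.1, (64, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, p

lr = 0.1
for step in range(2000):                        # plain SGD on synthetic labels
    y = float(step % 2)                         # alternate the two classes
    x = synthetic_patch(bool(y))
    h, p = forward(x)
    g = p - y                                   # dLoss/dlogit (cross-entropy)
    W2 -= lr * np.outer(h, g); b2 -= lr * g
    gh = (g * W2.ravel()) * (h > 0)             # backprop through ReLU
    W1 -= lr * np.outer(x, gh); b1 -= lr * gh

# Quick check on fresh synthetic patches
acc = np.mean([(forward(synthetic_patch(bool(c)))[1] > 0.5) == bool(c)
               for c in rng.integers(0, 2, 200)])
print(f"held-out synthetic accuracy: {acc:.2f}")
```

A network of this size would occupy only a few rows and columns of a crossbar array, which is what makes the on-chip implementation discussed in the abstract plausible.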
Novel computing architectures based on resistive switching memories (also known as memristors or RRAMs) have been shown to be promising approaches for tackling the energy inefficiency of deep learning and spiking neural networks. However, resistive switching technology is immature and suffers from numerous imperfections, which are often considered limitations on implementations of artificial neural networks. Nevertheless, a reasonable amount of variability can be harnessed to implement efficient probabilistic or approximate computing. This approach turns out to improve robustness, decrease overfitting, and reduce energy consumption for specific applications, such as Bayesian and spiking neural networks. Thus, certain non-idealities could become opportunities if we adapt machine learning methods to the intrinsic characteristics of resistive switching memories. In this short review, we introduce some key considerations for circuit design and the most common non-idealities. We illustrate the possible benefits of stochasticity and compression with examples of well-established software methods. We then present an overview of recent neural network implementations that exploit the imperfections of resistive switching memory, and discuss the potential and limitations of these approaches.
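The sketch below illustrates the "variability as a feature" idea in its simplest software form: read noise on memristive conductances is modeled as multiplicative Gaussian perturbations on the weights, and repeating noisy forward passes yields an ensemble, Bayesian-flavoured prediction with an uncertainty estimate. The noise model, its magnitude, and all numbers are illustrative assumptions, not a characterization of any specific device.

```python
# Minimal sketch: device read variability modeled as multiplicative weight
# noise, repurposed for approximate Bayesian inference via sampling.
# All parameters (weights, input, rel_sigma) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(0, 1.0, (4, 2))          # nominal (programmed) weights
x = np.array([0.5, -1.0, 0.25, 2.0])    # example input vector

def noisy_forward(x, W, rel_sigma=0.05):
    """One inference pass with multiplicative conductance noise,
    a common first-order model of memristor read variability."""
    W_noisy = W * (1.0 + rng.normal(0, rel_sigma, W.shape))
    z = x @ W_noisy
    return np.exp(z) / np.exp(z).sum()  # softmax over 2 classes

# Each pass samples a slightly different network; the spread across
# samples is a free uncertainty estimate, obtained here by re-reading
# the same weights rather than by storing an explicit weight ensemble.
samples = np.stack([noisy_forward(x, W) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)
print("class probabilities:", mean, "+/-", std)
```

On memristive hardware, each sample would come from simply re-reading the array, so the randomness that software Bayesian methods must generate explicitly is obtained at essentially no extra cost.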