Neural networks are computing models that have been driving progress in Machine Learning (ML) and Artificial Intelligence (AI) applications. In parallel, the first small-scale quantum computing devices have become available in recent years, paving the way for the development of a new paradigm in information processing. Here we give an overview of the most recent proposals aimed at bringing together these ongoing revolutions, and particularly at implementing the key functionalities of artificial neural networks on quantum architectures. We highlight the exciting perspectives in this context, and discuss the potential role of near-term quantum hardware in the quest for quantum machine learning advantage.
Artificial neural networks have been proposed as potential algorithms that could benefit from being implemented and run on quantum computers. In particular, they hold promise to greatly enhance Artificial Intelligence tasks such as image processing or pattern recognition. The elementary building block of a neural network is the artificial neuron, i.e., a computational unit performing simple mathematical operations on a set of data in the form of an input vector. Here we show how the design for the implementation of a previously introduced quantum artificial neuron [npj Quant. Inf. 5, 26], which fully exploits the use of superposition states to encode binary-valued input data, can be further generalized to accept continuous- instead of discrete-valued input vectors, without increasing the number of qubits. This further step is crucial to allow for a direct application of gradient-descent-based learning procedures, which would not be compatible with binary-valued data encoding.
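The core mechanism of the quantum neuron referenced above can be sketched classically: an N-dimensional input vector is amplitude-encoded on only log2(N) qubits, and the neuron's activation probability is the squared overlap between the input state and a weight state. The snippet below is a minimal illustrative sketch of that idea (function names are my own, not from the paper), showing how continuous-valued inputs fit into the same encoding without extra qubits.

```python
import numpy as np

def amplitude_encode(v):
    """Normalize a real vector so it is a valid quantum state's amplitude vector."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def activation_probability(inputs, weights):
    """Squared inner product |<psi_w|psi_i>|^2, i.e. the neuron's firing probability."""
    psi_i = amplitude_encode(inputs)
    psi_w = amplitude_encode(weights)
    return float(np.dot(psi_w, psi_i) ** 2)

# Four continuous-valued inputs would be encoded on just two qubits;
# perfectly aligned input and weight vectors give probability 1.
p = activation_probability([0.2, 0.5, 0.1, 0.9], [0.2, 0.5, 0.1, 0.9])
```

Note that the normalization step is what makes continuous-valued inputs directly admissible: any real vector becomes a legal state, which is the generalization the abstract describes.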
We present a model of a Continuous Variable Quantum Perceptron (CVQP) whose architecture implements a classical perceptron. The necessary non-linearity is obtained by measuring the output qubit and using the measurement outcome as input to an activation function. The latter is chosen to be the so-called ReLU activation function by virtue of its practical feasibility and the advantages it provides in learning tasks. The encoding of classical data into realistic finitely squeezed states and the use of superposed (entangled) input states for specific binary problems are discussed.
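The measurement-based non-linearity described above can be illustrated with a short classical sketch. This is an assumption-laden toy (the bias/threshold scheme and names are mine, not the paper's): a measurement outcome from the output mode is simply passed through ReLU, just as a classical perceptron applies its activation to a weighted sum.

```python
def relu(x):
    """Rectified linear unit: max(0, x)."""
    return max(0.0, x)

def cvqp_output(measured_expectation, bias=0.0):
    """Hypothetical sketch: feed a measurement outcome from the output
    mode into a ReLU activation, with an optional bias shift."""
    return relu(measured_expectation - bias)

y = cvqp_output(0.7, bias=0.2)   # 0.5: the neuron "fires" above threshold
y0 = cvqp_output(-0.3)           # 0.0: suppressed below threshold
```

The appeal of ReLU here, as the abstract notes, is practical: it is piecewise linear, so it is cheap to apply to a measurement record and well-behaved under gradient-based training.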
In the last few years, quantum computing and machine learning have fostered rapid developments in their respective areas of application, introducing new perspectives on how information processing systems can be realized and programmed. The rapidly growing field of Quantum Machine Learning aims at bringing together these two ongoing revolutions. Here we first review a series of recent works describing the implementation of artificial neurons and feed-forward neural networks on quantum processors. We then present an original realization of efficient individual quantum nodes based on variational unsampling protocols. We investigate different learning strategies involving global and local layer-wise cost functions, and we assess their performance, also in the presence of statistical measurement noise. While keeping full compatibility with the overall memory-efficient feed-forward architecture, our constructions effectively reduce the quantum circuit depth required to determine the activation probability of single neurons upon input of the relevant data-encoding quantum states. This suggests a viable approach towards the use of quantum neural networks for pattern classification on near-term quantum hardware.
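The variational training loop alluded to above can be illustrated with a deliberately tiny example (my own construction, not the paper's circuits): a single-qubit rotation R_y(theta) is tuned by gradient descent to minimize a global infidelity cost 1 - |<target|R_y(theta)|0>|^2, which is the structure a variational unsampling-style optimization follows at larger scale.

```python
import numpy as np

def ry(theta):
    """Real single-qubit rotation matrix R_y(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Assumed example target: the |+> state, reached from |0> at theta = pi/2.
target = np.array([1.0, 1.0]) / np.sqrt(2)

def cost(theta):
    """Global infidelity between R_y(theta)|0> and the target state."""
    out = ry(theta) @ np.array([1.0, 0.0])
    return 1.0 - np.dot(target, out) ** 2

theta, lr, eps = 0.1, 0.5, 1e-4
for _ in range(200):
    # Central finite-difference gradient estimate, then a descent step.
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= lr * grad
# theta converges near pi/2, where R_y maps |0> onto the target.
```

In the works summarized above, the interesting question is precisely how such a cost is defined, globally over the whole circuit versus locally layer by layer, and how robust the resulting optimization is to statistical measurement noise; this sketch only shows the shared gradient-descent skeleton.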