Physical reservoir computing is a computational paradigm that enables temporal pattern recognition to be performed directly in physical matter. By exciting non-linear dynamical systems and linearly classifying their changes in state, we can create highly energy-efficient devices capable of solving machine learning tasks without the need to build a modular system consisting of millions of neurons interconnected by synapses. To act as an effective reservoir, the chosen dynamical system must have three desirable properties: non-linearity, complexity, and fading memory. We present task-agnostic quantitative measures for each of these three requirements and exemplify them for two reservoirs: an echo state network and a simulated magnetic skyrmion-based reservoir. We show that, in general, systems with lower damping reach higher values in all three performance metrics, whilst for input signal strength there is a natural trade-off between memory capacity and the non-linearity of the reservoir's behaviour. In contrast to typical task-dependent reservoir computing benchmarks, these metrics can be evaluated in parallel from a single input signal, drastically speeding up the parameter search needed to design efficient, high-performance reservoirs.
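The evaluation pipeline this abstract describes, driving a dynamical system and training only a linear readout on its states, can be sketched in a few lines. Below is a minimal, self-contained example: a standard leaky echo state network driven by a random signal, with the fading-memory requirement quantified as linear memory capacity via ridge-regression readouts. All hyperparameters here (reservoir size, spectral radius, leak rate, ridge strength) are illustrative assumptions, not values from the paper.

```python
# Minimal echo state network (ESN) with a linear-memory-capacity estimate,
# one of the task-agnostic metrics described above. All hyperparameter
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, T, max_delay = 200, 4000, 30          # reservoir size, signal length, max recall delay
u = rng.uniform(-1, 1, T)                # random input drive

# Random reservoir weights, rescaled to a spectral radius below 1 (fading memory).
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Run the reservoir: the leaky-tanh update gives the non-linear state trajectory.
leak = 0.5
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Linear memory capacity: for each delay k, train a linear readout (ridge
# regression) to reconstruct u[t - k] and sum the squared correlations.
washout = 100
X = states[washout:]
mc = 0.0
for k in range(1, max_delay + 1):
    target = u[washout - k : T - k]
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
    pred = X @ w_out
    mc += np.corrcoef(pred, target)[0, 1] ** 2

print(f"Estimated linear memory capacity: {mc:.2f}")
```

Because every delayed readout is trained on the same recorded state trajectory, this metric (and analogous non-linearity and complexity measures) can indeed be evaluated in parallel from a single input run, which is the speed-up the abstract highlights.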
Physical reservoir computing is a computational paradigm that enables spatiotemporal pattern recognition to be performed directly in matter. The use of physical matter leads the way toward energy-efficient devices capable of solving machine learning problems without having to build a system of millions of interconnected neurons. Proposed herein is a high-performance "skyrmion mixture reservoir" that implements the reservoir computing model with multidimensional inputs. This implementation solves spoken digit classification tasks with an overall model accuracy of 97.4% and a word error rate below 1%, the best performance yet reported for in materio reservoir computers. Given the quality of these results and the low-power properties of magnetic texture reservoirs, skyrmion fabrics are a compelling candidate for reservoir computing.
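In the reservoir computing model this abstract refers to, the only trained component is a linear readout over the measured reservoir states. The sketch below shows that readout stage for a digit classifier under stated assumptions: stand-in random features replace the measured skyrmion responses, and the feature and class counts are hypothetical.

```python
# Sketch of the readout stage of a reservoir classifier, assuming the
# physical reservoir responses have already been collected (here replaced
# by stand-in random features). Shapes and counts are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features, n_digits = 500, 128, 10

# Stand-in for measured reservoir states and their digit labels.
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_digits, n_samples)

# One-hot targets; the readout is a single ridge-regression layer, the
# only trained part of the reservoir computing model.
Y = np.eye(n_digits)[y]
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ Y)

# Winner-take-all over the readout outputs gives the predicted digit.
pred = np.argmax(X @ W_out, axis=1)
print("train accuracy:", (pred == y).mean())
```

With real reservoir features in place of the random stand-ins, this closed-form ridge fit is the entire training cost, which is why the physical substrate can dominate the energy budget.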
Nanomagnetic artificial spin-systems are ideal candidates for neuromorphic hardware. Their passive memory, state-dependent dynamics and nonlinear GHz spin-wave response provide powerful computation. However, any single physical reservoir must trade off between performance metrics, including nonlinearity and memory capacity, with the compromise typically hard-coded. Here, we present three artificial spin-systems and show how tuning system geometry and dynamics defines computing performance. We engineer networks in which each node is a high-dimensional physical reservoir, implementing parallel, deep and multilayer physical neural network architectures. This resolves the performance compromise of any single physical reservoir, allowing a small suite of synergistic physical systems to address diverse tasks and provide a broad range of reprogrammable, computationally distinct configurations. These networks outperform any single reservoir across a broad task set. Crucially, we move beyond reservoir computing to present a method for reconfigurably programming inter-layer network connections, enabling on-demand, task-optimised performance.
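The parallel and deep architectures this abstract mentions can be illustrated with software stand-ins for the physical nodes. In the sketch below, the node sizes, the series/parallel wiring, and the random inter-layer projection are assumptions chosen for illustration; the fixed projection merely stands in for the paper's programmable inter-layer connections.

```python
# Sketch of parallel and deep reservoir-network architectures, with each
# physical node replaced by a small software reservoir. All parameters
# are illustrative assumptions, not the paper's spin-system values.
import numpy as np

rng = np.random.default_rng(2)

def make_node(n, rho=0.8):
    """Random reservoir node scaled to spectral radius rho (fading memory)."""
    W = rng.normal(size=(n, n))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-1, 1, n)
    return W, w_in

def run_node(node, drive):
    """Drive one node with a 1-D signal; return its state trajectory."""
    W, w_in = node
    x, out = np.zeros(W.shape[0]), []
    for u_t in drive:
        x = np.tanh(W @ x + w_in * u_t)
        out.append(x.copy())
    return np.array(out)

T = 1000
u = rng.uniform(-1, 1, T)
node_a, node_b = make_node(50), make_node(50)

# Parallel: both nodes see the raw input; their states are concatenated
# so a single readout can draw on both response regimes at once.
parallel_states = np.hstack([run_node(node_a, u), run_node(node_b, u)])

# Deep: node B is driven by a projection of node A's states. The fixed
# random projection stands in for a programmable inter-layer connection.
proj = rng.normal(size=50) / 50
deep_states = run_node(node_b, run_node(node_a, u) @ proj)

print(parallel_states.shape, deep_states.shape)  # (1000, 100) (1000, 50)
```

Pairing nodes with complementary properties, for instance one tuned for memory capacity and one for nonlinearity, is what lets the composite network escape the single-reservoir trade-off the abstract describes.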