Consistency is an extension of generalized synchronization which quantifies the degree of functional dependency of a driven nonlinear system on its input. We apply this concept to echo-state networks, an artificial-neural-network implementation of reservoir computing. Through a replica test we measure the consistency levels of the high-dimensional response, yielding a comprehensive portrait of the echo-state property.

When a nonlinear dynamical system is externally modulated by an information-carrying signal, its erratic response hides an intricate property: consistency. From time series alone it is difficult to determine whether the variability in the output is entirely determined by the driving signal. For autonomous chaotic systems it is well known that their inherent instability gives rise to a certain level of unpredictability. For a driven system, this means that part of the variability of its output does not depend on the drive. Consistency quantifies the degree of this dependency through a replica test: the nonlinear system is repeatedly driven by the same signal, and the corresponding responses are compared. We apply this concept to echo-state networks, a class of artificial neural networks with fixed random internal connectivity. Such networks have been used successfully for sequential processing tasks such as nonlinear time-series prediction and spoken-digit recognition. Studying the consistency property allows for a more comprehensive understanding of the dynamical response and for tailoring the network systematically towards enhanced functionality and a wider range of applications.
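The replica test described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a small tanh echo-state network with fixed random connectivity is driven twice by the same signal from different random initial states, and the consistency level is estimated as the mean correlation between the two replica responses after a washout transient. All parameters (reservoir size, spectral radius, input weights, drive) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative echo-state network: fixed random reservoir driven by u(t).
N = 100                 # reservoir size (assumed)
T, washout = 2000, 500  # simulation length and discarded transient
W = rng.normal(size=(N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.8
w_in = rng.normal(size=N)                        # input weights

u = rng.normal(size=T)  # shared information-carrying drive

def run(x0):
    """Iterate the tanh reservoir from initial state x0 under the drive u."""
    x = x0.copy()
    traj = np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        traj[t] = x
    return traj[washout:]

# Replica test: identical drive, different initial conditions.
r1 = run(rng.normal(size=N))
r2 = run(rng.normal(size=N))

# Consistency level: mean cross-correlation between replica responses,
# averaged over reservoir nodes (close to 1 when the echo-state
# property holds, i.e. the response is fully determined by the drive).
c = np.mean([np.corrcoef(r1[:, i], r2[:, i])[0, 1] for i in range(N)])
print(f"consistency level ~ {c:.3f}")
```

Raising the spectral radius (or weakening the drive) in this sketch pushes the reservoir towards its own instability, and the consistency level drops below one, which is the regime the replica test is designed to detect.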
We examine the use of recurrence networks in studying nonlinear deterministic dynamical systems. Specifically, we focus on k-nearest-neighbour networks, which have already been shown to contain meaningful (and, more importantly, easily accessible) information about the dynamics. Superfamily phenomena have previously been identified in such networks, although a complete explanation for their appearance was not provided. We present the local dimension of the attractor as one possible determinant, discussing the ability of specific motifs to be embedded in various dimensions. In turn, the Lyapunov spectrum provides the required link between attractor dimension and dynamics. We also prove the invertibility of k-nearest-neighbour networks. A new metric is provided under which the k-nearest-neighbour and ϵ-recurrence construction methods produce identical networks. Hence, the already established ϵ-recurrence inversion algorithm applies equally to the k-nearest-neighbour case, and invertibility follows. The change of metric necessarily distorts the shape of the reconstructed attractor, although its topology is preserved.
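The k-nearest-neighbour construction referred to above can be sketched as follows. This is an illustrative example only, not the paper's method: a scalar time series from the logistic map is delay-embedded, each embedded point becomes a node, and each node is linked to its k nearest neighbours in Euclidean distance. The map, embedding parameters, and k are all assumed for illustration.

```python
import numpy as np

# Illustrative time series: the chaotic logistic map x -> 4x(1-x).
n = 500
x = np.empty(n)
x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Time-delay embedding in dimension 2 with lag 1 (assumed adequate here).
tau = 1
emb = np.column_stack([x[:-tau], x[tau:]])

# k-nearest-neighbour network: link each point to its k closest neighbours.
k = 4
d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)              # exclude self-links
idx = np.argsort(d, axis=1)[:, :k]       # k nearest neighbours per node
A = np.zeros(d.shape, dtype=bool)
rows = np.repeat(np.arange(len(emb)), k)
A[rows, idx.ravel()] = True              # directed adjacency matrix
A_sym = A | A.T                          # symmetrized network, often analysed

degrees = A.sum(axis=1)
print(degrees.min(), degrees.max())      # prints "4 4": out-degree is k everywhere
```

By construction every node has out-degree exactly k, in contrast to the ϵ-recurrence construction, where degree varies with the local point density; this fixed-degree property is what makes the two methods coincide only under a suitably rescaled metric.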