Networks in machine learning offer examples of complex high-dimensional dynamical systems that are inspired by, and reminiscent of, biological systems. Here, we study the learning dynamics of generalized Hopfield networks, which permit visualization of internal memories. These networks have been shown to undergo a “feature-to-prototype” transition as the strength of the network nonlinearity is increased, wherein the learned, or terminal, states of the internal memories change from mixed to pure states. Focusing on the prototype learning dynamics of the internal memories, we observe stereotyped dynamics wherein similar subgroups of memories sequentially split at well-defined saddles. The splitting order is interpretable and reproducible from one simulation to another. The dynamics prior to the splits are robust to variations in many features of the system. To develop a more rigorous understanding of these global dynamics, we study smaller subsystems that exhibit properties similar to those of the full system. Within these smaller systems, we combine analytical calculations with numerical simulations to study the dynamics of the feature-to-prototype transition and the emergence of saddle points in the learning landscape. We exhibit regimes where saddles appear and disappear through saddle-node bifurcations, qualitatively changing the distribution of learned memories as the strength of the nonlinearity is varied; this allows us to systematically investigate the mechanisms that underlie the emergence of the learning dynamics. Several features of the learning dynamics are reminiscent of Waddington's caricature of cellular differentiation, and we attempt to make this analogy more precise. Memories can thus differentiate in a predictable and controlled way, revealing bridges between experimental biology, dynamical systems theory, and machine learning.
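To fix intuition for the feature-to-prototype transition described above, the following is a minimal, self-contained sketch, not the authors' model or training rule: a small set of memories is relaxed toward responsibility-weighted means of the data, with an exponent n standing in for the strength of the nonlinearity. The toy dataset, the softmax-style responsibility rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption, not the paper's dataset): two random +/-1 "prototype"
# patterns, each observed many times with 10% of entries flipped.
N, per_class, flip = 100, 50, 0.10
protos = rng.choice([-1.0, 1.0], size=(2, N))
data = np.repeat(protos, per_class, axis=0)
data = np.where(rng.random(data.shape) < flip, -data, data)

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def train_memories(data, K=2, n=1.0, steps=1000, lr=0.1):
    """Relax K memories toward responsibility-weighted means of the data.
    Each pattern pulls on memory mu with weight softmax_mu(n * overlap), so the
    exponent n acts as the nonlinearity strength: near-uniform pulls for small n,
    winner-take-all pulls for large n."""
    P, N = data.shape
    xi = 0.1 * rng.standard_normal((K, N))        # small random initial memories
    for _ in range(steps):
        overlaps = data @ xi.T / N                # (P, K) overlaps, roughly in [-1, 1]
        w = softmax(n * overlaps, axis=1)         # per-pattern responsibilities
        target = (w.T @ data) / w.sum(axis=0)[:, None]
        xi += lr * (target - xi)                  # gradient-flow-like relaxation
    return xi

# Small n: both memories settle near the global data mean (a mixed, "feature"-like state).
# Large n: the memories split, and each converges to one class prototype.
for n in (1, 10, 50):
    xi = train_memories(data, n=n)
    overlaps = np.abs(xi @ protos.T) / N          # |overlap| of each memory with each prototype
    print(f"n = {n:2d}:\n{np.round(overlaps, 2)}")
```

In this toy setting, small n leaves both memories with comparable overlap with both prototypes (a mixed state), while large n drives each memory onto a single prototype, with the split occurring at an intermediate value of n; this caricatures, but does not reproduce, the transition and splitting dynamics analyzed in the paper.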