The brain is a complex organ characterized by heterogeneous patterns of structural connections supporting unparalleled feats of cognition and a wide range of behaviors. New noninvasive imaging techniques now allow these patterns to be carefully and comprehensively mapped in individual humans and animals. Yet, it remains a fundamental challenge to understand how the brain's structural wiring supports cognitive processes, with major implications for the personalized treatment of mental health disorders. Here, we review recent efforts to meet this challenge that draw on intuitions, models, and theories from physics, spanning the domains of statistical mechanics, information theory, and dynamical systems and control. We begin by considering the organizing principles of brain network architecture instantiated in structural wiring under constraints of symmetry, spatial embedding, and energy minimization. We next consider models of brain network function that stipulate how neural activity propagates along these structural connections, producing the long-range interactions and collective dynamics that support a rich repertoire of system functions. Finally, we consider perturbative experiments and models for brain network control, which leverage the physics of signal transmission along structural wires to infer intrinsic control processes that support goal-directed behavior and to inform stimulation-based therapies for neurological disease and psychiatric disorders. Throughout, we highlight several open questions in the physics of brain network structure, function, and control that will require creative efforts from physicists willing to brave the complexities of living matter.
By far the simplest network model is a binary undirected graph in which identical nodes represent system components and identical edges indicate relations or connections between pairs of nodes (see the figure). Such a network can be encoded in an adjacency matrix A, where each element Aᵢⱼ indicates the strength of connectivity between nodes i and j. When all edge strengths are unity, the network is said to be binary. When edges have a range of weights, the network represented by the adjacency matrix is said to be weighted. When A = Aᵀ, the network is undirected; otherwise, the network is directed. One can extend this simple encoding to study multilayer, multislice, and multiplex networks 308; dynamic or temporal networks 127, 309; annotated networks 310; hypergraphs 311; and simplicial complexes 230. One can also calculate various statistics to quantify the architecture of a network and to infer the function thereof (see figure). Intuitively, these statistics range from measures of the local structure in the network, which depend solely on the links directly emanating from a given node (e.g., degree and clustering), to measures of the network's global structure, which depend on the complex pattern of interconnections between all nodes (e.g.,...
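These definitions can be made concrete with a minimal sketch in Python. The toy four-node network and the helper function below are illustrative assumptions, not taken from the text; they show how a binary undirected adjacency matrix encodes a network and how the local measures mentioned above (degree and clustering) follow directly from it.

```python
import numpy as np

# Hypothetical 4-node binary undirected network: a triangle (0-1-2)
# plus a pendant node 3 attached to node 0.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

# Undirected network <=> the adjacency matrix equals its transpose.
assert np.array_equal(A, A.T)

# Degree of each node: number of edges emanating from it (a local measure).
degree = A.sum(axis=1)

def clustering(A, i):
    """Clustering coefficient of node i: the fraction of pairs of i's
    neighbors that are themselves connected (another local measure)."""
    nbrs = np.flatnonzero(A[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Each edge among the neighbors appears twice in the symmetric submatrix.
    edges_among_nbrs = A[np.ix_(nbrs, nbrs)].sum() / 2.0
    return edges_among_nbrs / (k * (k - 1) / 2.0)

print(degree)            # node degrees
print(clustering(A, 0))  # node 0: one connected pair (1,2) out of three -> 1/3
```

Global measures (e.g., shortest-path statistics) would instead require propagating information across the whole matrix rather than reading off single rows.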
Humans are adept at uncovering abstract associations in the world around them, yet the underlying mechanisms remain poorly understood. Intuitively, learning the higher-order structure of statistical relationships should involve complex mental processes. Here we propose an alternative perspective: that higher-order associations instead arise from natural errors in learning and memory. Using the free energy principle, which bridges information theory and Bayesian inference, we derive a maximum entropy model of people's internal representations of the transitions between stimuli. Importantly, our model (i) affords a concise analytic form, (ii) qualitatively explains the effects of transition network structure on human expectations, and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways, with direct implications for the study and design of optimally learnable information sources.
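The flavor of such a model can be illustrated with a short sketch. Assuming, as a simplification not stated in the abstract, that the learner's internal estimate of a transition matrix A is a memory-discounted sum over walks of all lengths, Â = (1 − η) A (I − η A)⁻¹, with an illustrative memory-error parameter η ∈ [0, 1):

```python
import numpy as np

def learned_transitions(A, eta):
    """Hypothetical internal estimate of a transition matrix A:
    a discounted sum over walks of all lengths,
        A_hat = (1 - eta) * A @ inv(I - eta * A)
              = (1 - eta) * (A + eta*A^2 + eta^2*A^3 + ...),
    where eta in [0, 1) controls how strongly memories of successive
    transitions blur together (eta = 0 recovers veridical learning)."""
    n = A.shape[0]
    return (1 - eta) * A @ np.linalg.inv(np.eye(n) - eta * A)

# Toy 3-state transition structure with uniform transitions and no
# self-transitions (an illustrative choice, not from the paper).
A = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

A_hat = learned_transitions(A, eta=0.3)
print(A_hat)
```

Two properties make the sketch useful: each row of Â remains a valid probability distribution, and Â places weight on multi-step (higher-order) associations that are absent from A itself, which is the qualitative signature of learning abstract structure through mental errors.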
Living systems break detailed balance at small scales, consuming energy and producing entropy in the environment to perform molecular and cellular functions. However, it remains unclear how broken detailed balance manifests at macroscopic scales and how such dynamics support higher-order biological functions. Here we present a framework to quantify broken detailed balance by measuring entropy production in macroscopic systems. We apply our method to the human brain, an organ whose immense metabolic consumption drives a diverse range of cognitive functions. Using whole-brain imaging data, we demonstrate that the brain nearly obeys detailed balance when at rest, but strongly breaks detailed balance when performing physically and cognitively demanding tasks. Using a dynamic Ising model, we show that these large-scale violations of detailed balance can emerge from fine-scale asymmetries in the interactions between elements, a known feature of neural systems. Together, these results suggest that violations of detailed balance are vital for cognition and provide a general tool for quantifying entropy production in macroscopic systems.
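The quantification step can be sketched with the simplest plug-in estimator: count transitions between discretized states and measure the asymmetry between forward and reverse flows. The function name and toy sequences below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def entropy_production(states, n_states):
    """Estimate entropy production (nats per step) from a sequence of
    discrete states as the asymmetry between forward and reverse
    transition statistics: sum_ij p_ij * log(p_ij / p_ji).
    This vanishes exactly when the empirical flux is symmetric,
    i.e. when detailed balance holds."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    p = counts / counts.sum()
    ep = 0.0
    for i in range(n_states):
        for j in range(n_states):
            # Skip pairs with unobserved reverse transitions (a crude
            # regularization; the raw estimator diverges there).
            if p[i, j] > 0 and p[j, i] > 0:
                ep += p[i, j] * np.log(p[i, j] / p[j, i])
    return ep

# A back-and-forth sequence satisfies detailed balance...
rev = [0, 1, 0, 1, 0, 1, 0]
# ...while a sequence biased around the cycle 0 -> 1 -> 2 -> 0 breaks it.
biased = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 2, 1, 0]
print(entropy_production(rev, 3), entropy_production(biased, 3))
```

For the biased sequence the estimator gives (1/2) ln 3 ≈ 0.55 nats per step, while the reversible sequence gives exactly zero, mirroring the rest-versus-task contrast described above.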