Abstract: We propose new measures of shared information, unique information and synergistic information that can be used to decompose the mutual information of a pair of random variables (Y, Z) with a third random variable X. Our measures are motivated by an operational idea of unique information, which suggests that shared information and unique information should depend only on the marginal distributions of the pairs (X, Y) and (X, Z). Although this invariance property has not been studied before, it is satisfied by other proposed measures of shared information. The invariance property does not uniquely determine our new measures, but it implies that the functions that we define are bounds to any other measures satisfying the same invariance property. We study properties of our measures and compare them to other candidate measures.
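The decomposition in question splits I(X; (Y, Z)) into shared, unique, and synergistic parts. The following minimal sketch (not the authors' measure; the XOR distribution and helper names are illustrative assumptions) computes the three classical mutual information terms for the XOR gate, where the whole-pair information is purely synergistic:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits from a joint distribution table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# XOR example: X = Y xor Z with Y, Z uniform and independent.
# p(x, y, z) stored as a 2x2x2 table with axes (x, y, z).
p = np.zeros((2, 2, 2))
for y in (0, 1):
    for z in (0, 1):
        p[y ^ z, y, z] = 0.25

# I(X;(Y,Z)) treats the pair (Y,Z) as a single variable: reshape to 2x4.
i_x_yz = mutual_information(p.reshape(2, 4))
i_x_y = mutual_information(p.sum(axis=2))   # marginalize out z
i_x_z = mutual_information(p.sum(axis=1))   # marginalize out y
print(i_x_yz, i_x_y, i_x_z)  # → 1.0 0.0 0.0
```

Here neither Y nor Z alone carries information about X, yet the pair determines X completely, so any valid decomposition must assign the full 1 bit to synergy.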
How can the information that a set {X1, . . . , Xn} of random variables contains about another random variable S be decomposed? To what extent do different subgroups provide the same, i.e., shared or redundant, information, carry unique information, or interact to produce synergistic information? Recently, Williams and Beer proposed such a decomposition based on natural properties for shared information. While these properties fix the structure of the decomposition, they do not uniquely specify the values of the different terms. Therefore, we investigate additional properties such as strong symmetry and left monotonicity. We find that strong symmetry is incompatible with the properties proposed by Williams and Beer. Although left monotonicity is a very natural property for an information measure, it is not fulfilled by any of the proposed measures. We also study a geometric framework for information decompositions and ask whether it is possible to represent shared information by a family of posterior distributions. Finally, we draw connections to the notions of shared knowledge and common knowledge in game theory. While many people believe that independent variables cannot share information, we show that in game theory independent agents can have shared knowledge, but not common knowledge. We conclude that intuition and heuristic arguments do not suffice when arguing about information.
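Williams and Beer's decomposition is built on their redundancy measure I_min, defined as the expected minimum specific information the sources provide about each outcome of S. A minimal sketch for two sources (function names and the "copy" test distribution are my own assumptions, not taken from the abstract):

```python
import numpy as np

def specific_information(p_sa, s):
    """Specific information I(S=s; A) from a joint table p(s, a):
    sum_a p(a|s) * log2( p(s|a) / p(s) )."""
    ps = p_sa.sum(axis=1)
    pa = p_sa.sum(axis=0)
    total = 0.0
    for a in range(p_sa.shape[1]):
        if p_sa[s, a] > 0:
            p_a_given_s = p_sa[s, a] / ps[s]
            p_s_given_a = p_sa[s, a] / pa[a]
            total += p_a_given_s * np.log2(p_s_given_a / ps[s])
    return total

def i_min(p_s_a1_a2):
    """Williams-Beer redundancy I_min(S; {A1, A2}) for a table p(s, a1, a2)."""
    p_sa1 = p_s_a1_a2.sum(axis=2)
    p_sa2 = p_s_a1_a2.sum(axis=1)
    ps = p_s_a1_a2.sum(axis=(1, 2))
    return sum(ps[s] * min(specific_information(p_sa1, s),
                           specific_information(p_sa2, s))
               for s in range(len(ps)) if ps[s] > 0)

# Copy example: A1 = A2 = S with S uniform binary -> 1 bit of redundancy.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
redundancy = i_min(p)
print(redundancy)  # → 1.0
```

Because I_min only compares per-outcome information quantities, it can report nonzero redundancy even for independent sources, which is one entry point to the debates about additional axioms discussed above.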
Sleep encompasses approximately a third of our lifetime, yet its purpose and biological function are not well understood. Without sleep, optimal brain functions such as responsiveness to stimuli, information processing, and learning may be impaired. Such observations suggest that sleep plays a crucial role in organizing or reorganizing neuronal networks of the brain toward states where information processing is optimized. Increasing evidence suggests that cortical neuronal networks operate near a critical state characterized by balanced activity patterns, which supports optimal information processing. However, it remains unknown whether critical dynamics are affected in the course of wake and sleep, which would also impact information processing. Here, we show that signatures of criticality are progressively disturbed during wake and restored by sleep. We demonstrate that the precise power laws governing the cascading activity of neuronal avalanches and the distribution of phase-lock intervals in human electroencephalographic recordings are increasingly disarranged during sustained wakefulness. These changes are accompanied by a decrease in variability of synchronization. Interpreted in the context of a critical branching process, these seemingly different findings indicate a decline of balanced activity and a progressive departure from criticality toward states characterized by an imbalance toward excitation, in which larger events dominate the dynamics. Conversely, sleep restores the critical state, resulting in recovered power-law characteristics in activity and variability of synchronization. These findings support the intriguing hypothesis that sleep may be important to reorganize cortical network dynamics to a critical state, thereby assuring optimal computational capabilities for the following time awake.
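The power-law signatures mentioned here are typically quantified by estimating the exponent of the avalanche-size distribution. A minimal sketch of the standard continuous maximum-likelihood estimator (Clauset-style; the synthetic data and exponent value are illustrative assumptions, not the paper's EEG results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic avalanche sizes from a continuous power law p(s) ~ s^-1.5,
# generated by inverse-transform sampling for s >= s_min.
s_min, alpha_true = 1.0, 1.5
u = rng.random(50_000)
sizes = s_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood exponent estimate for the continuous case:
#   alpha_hat = 1 + n / sum_i ln(s_i / s_min)
alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / s_min))
print(alpha_hat)  # close to 1.5
```

Deviations of such fitted exponents (and of goodness-of-fit to a power law) across sustained wakefulness are one concrete way "disarranged" power laws can be measured.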
Abstract: Measures of complexity are of immediate interest for the field of autonomous robots, both as a means to classify behavior and as an objective function for the autonomous development of robot behavior. In the present paper we consider predictive information in sensor space as a measure for the behavioral complexity of a two-wheeled embodied robot moving in a rectangular arena with several obstacles. The mutual information (MI) between past and future sensor values is found empirically to have a maximum for a behavior which is both explorative and sensitive to the environment. This makes predictive information a prospective candidate as an objective function for the autonomous development of such behaviors. We derive theoretical expressions for the MI in order to obtain an explicit update rule for the gradient ascent dynamics. Interestingly, in the case of a linear or linearized model of the sensorimotor dynamics, the structure of the derived learning rule depends only on the dynamical properties, while the value of the MI influences only the learning rate. In this way the problem of the prohibitively large sampling times for information-theoretic measures can be circumvented. This result can be generalized and may help to derive explicit learning rules from complexity-theoretic measures.
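Predictive information in its simplest form is the mutual information between the present and the next sensor value. A minimal plug-in estimator for a scalar time series (the binning scheme, function name, and test signals are my assumptions, not the paper's method) can be sketched as:

```python
import numpy as np

def predictive_information(series, bins=8):
    """One-step predictive information I(s_t; s_{t+1}) of a scalar series,
    estimated from a binned empirical joint distribution (plug-in, in bits)."""
    edges = np.histogram_bin_edges(series, bins=bins)
    x = np.digitize(series, edges[1:-1])          # bin indices 0..bins-1
    joint = np.zeros((bins, bins))
    for a, b in zip(x[:-1], x[1:]):
        joint[a, b] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# A smooth, predictable trajectory carries more predictive information
# than white noise (finite-sample bias of the plug-in estimate not corrected).
t = np.linspace(0, 20 * np.pi, 5000)
rng = np.random.default_rng(1)
pi_sin = predictive_information(np.sin(t))
pi_noise = predictive_information(rng.random(5000))
print(pi_sin > pi_noise)  # → True
```

The abstract's key point is that such sampling-hungry estimates can be sidestepped: under a linearized sensorimotor model the gradient-ascent rule can be derived analytically, with the MI value entering only through the learning rate.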