How cognitive neural systems process information is largely unknown, in part because of how difficult it is to accurately follow the flow of information from sensors via neurons to actuators. Measuring the flow of information is different from measuring correlations between firing neurons, for which several measures are available, foremost among them the Shannon information, which is an undirected measure. Several information-theoretic notions of “directed information” have been used to successfully detect the flow of information in some systems, in particular in the neuroscience community. However, recent work has shown that directed information measures such as transfer entropy can sometimes inadequately estimate information flow, or even fail to identify manifest directed influences, especially if neurons contribute in a cryptographic manner to influence the effector neuron. Because it is unclear how often such cryptic influences emerge in cognitive systems, the usefulness of transfer entropy measures to reconstruct information flow is unknown. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks (motion detection and sound localization). Besides counting the frequency of problematic logic gates, we also test whether transfer entropy applied to an activity time series recorded from behaving digital brains can infer information flow, compared to a ground-truth model of direct influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer directed information when it exists, and sometimes suggest a causal connection when there is none. However, the extent of incorrect inference strongly depends on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and of quantifying their relevance in any given nervous system.
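To make the “cryptographic” failure mode concrete, the sketch below is an illustrative assumption on our part (not the estimator used in the study): it computes a plug-in transfer entropy with history length one from binary activity traces. When an effector neuron fires as the XOR of two source neurons, the pairwise transfer entropy from either source is approximately zero even though the pair jointly determines the effector, which is the kind of directed influence such measures can miss. All variable names are hypothetical.

```python
# Minimal sketch (assumption: binary time series, history length 1) of a
# plug-in transfer-entropy estimator, illustrating how an XOR ("cryptographic")
# interaction can hide directed influence from pairwise transfer entropy.
import numpy as np
from collections import Counter

def transfer_entropy(source, target, base=2):
    """Plug-in estimate of TE(source -> target), history length 1, in bits."""
    x_next, x_now, y_now = target[1:], target[:-1], source[:-1]
    n = len(x_next)
    joint = Counter(zip(x_next, x_now, y_now))       # counts of (x_{t+1}, x_t, y_t)
    pair_xy = Counter(zip(x_now, y_now))             # counts of (x_t, y_t)
    pair_xx = Counter(zip(x_next, x_now))            # counts of (x_{t+1}, x_t)
    marg_x = Counter(x_now)                          # counts of x_t
    te = 0.0
    for (xn, xc, yc), c in joint.items():
        p_joint = c / n
        p_cond_full = c / pair_xy[(xc, yc)]          # p(x_{t+1} | x_t, y_t)
        p_cond_self = pair_xx[(xn, xc)] / marg_x[xc] # p(x_{t+1} | x_t)
        te += p_joint * np.log(p_cond_full / p_cond_self) / np.log(base)
    return te

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 100_000)    # hypothetical "sensor" neuron A
b = rng.integers(0, 2, 100_000)    # hypothetical "sensor" neuron B
x = np.empty_like(a)               # hypothetical effector neuron X
x[0] = 0
x[1:] = a[:-1] ^ b[:-1]            # X fires as XOR of A and B ("cryptographic" logic)

print(transfer_entropy(a, x))      # ~0 bits: pairwise TE misses the influence of A
print(transfer_entropy(b, x))      # ~0 bits: pairwise TE misses the influence of B
# Conditioning on both sources jointly would reveal the full 1 bit of influence,
# since A_t and B_t together determine X_{t+1} exactly.
```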
A central goal of evolutionary biology is to explain the origins and distribution of diversity across life. Beyond species or genetic diversity, we also observe diversity in the circuits (genetic or otherwise) underlying complex functional traits. However, while the theory behind the origins and maintenance of genetic and species diversity has been studied for decades, theory concerning the origin of diverse functional circuits is still in its infancy. It is not known how many different circuit structures can implement any given function, which evolutionary factors lead to different circuits, and whether the evolution of a particular circuit was due to adaptive or non-adaptive processes. Here, we use digital experimental evolution to study the diversity of neural circuits that encode motion detection in digital (artificial) brains. We find that evolution leads to an enormous diversity of potential neural architectures encoding motion detection circuits, even for circuits encoding the exact same function. Evolved circuits vary in both redundancy and complexity (as previously found in genetic circuits), suggesting that similar evolutionary principles underlie circuit formation in any substrate. We also show that a simple (designed) motion detection circuit that is optimally adapted gains in complexity when evolved further, and that selection for mutational robustness led to this gain in complexity.
While cognitive theory has advanced several candidate frameworks to explain attentional entrainment, the neural basis for the temporal allocation of attention is unknown. Here we present a new model of attentional entrainment that is guided by empirical evidence obtained using a cohort of 50 artificial brains. These brains were evolved in silico to perform a duration judgement task similar to one in which human subjects perform duration judgements in auditory oddball paradigms. We found that the artificial brains display psychometric characteristics remarkably similar to those of human listeners, and also exhibit similar patterns of perceptual distortion when presented with out-of-rhythm oddballs. A detailed analysis of the mechanisms behind the duration distortion in the artificial brains suggests that their attention peaks at the end of the tone, which is inconsistent with previous attentional entrainment models. Instead, our extended model of entrainment emphasises increased attention to those aspects of the stimulus that the brain expects to be highly informative.
Computational neuroscience attempts to build models of the brain that break cognition into basic elements. Here we study time perception in artificial brains, evolved over thousands of generations to judge the duration of tones, and compare the evolved brains' behavioral characteristics to those of human subjects performing the same task. We observe substantial similarities in the psychometric properties of human subjects and digital brains, including very similar perceptual artifacts, but also see differences due to the different selective pressures during training or evolution. Our findings suggest that digital experimentation using brains evolved within a computer can advance computational cognitive neuroscience by discovering new cognitive mechanisms and heuristics.