A model of computation describes how units of information and an organizational architecture transform input to output. In the cerebral cortex, the unit of information is a normalized collection of neuronal membrane potentials representing a vector of complex amplitudes. The organizational architecture of the cortex allows operator matrices, programmed in dendritic arborizations, to act on vectors of complex amplitudes. Three well-documented facts are sufficient to define the cortical model of computation: first, the dendritic inputs that compute cortical membrane potentials are, in general, complex numbers (amplitudes) that may be configured as state vectors in complex Hilbert spaces; second, normalization is a canonical neural computation, specifically for vectors of cortical membrane amplitudes representing orthogonal states; and third, dendritic arborizations are programmable devices, in a universal sense, for mathematical and logical functions. These three facts, well studied and accepted for years, are all that is needed for a simple, well-defined, and well-known model of computation. A great deal of research is presently devoted to understanding particular neural algorithms, circuits, and programming. Understanding the algorithm that a particular circuit implements is not to be confused with identifying the much more basic underlying model of computation. The analogy from classical computing is that the classical model stores and manipulates information fundamentally in the form of bits: essentially a finite set of possible states, typically {0, 1}. The primitive classical container of information is analogous to a two-state switch. Nevertheless, sophisticated computational circuitry and algorithms are developed on that simple classical realization.
In contrast to the finite set of states found in the fundamental unit of classical information, the fundamental units of information in the cerebral cortex form a model of computation over the complex field in normalized sets of mutually inhibitory amplitudes. This means that the cortex represents information in a fundamental container (a vector of amplitudes in membrane potentials) that can encode discrete states over a continuum of values (presumably down to some finite resolution). A well-known result from probability theory, together with standard neuroimaging analysis, argues that these mutually inhibitory fundamental units of information may normalize complex amplitudes under the 2-norm. Sufficiently sophisticated computational means are available in the cortex to program unitary operators, state reduction and expansion operators, and tensor products that operate on vectors of normalized membrane amplitudes. This model of computation is typically viewed as a probabilistic or predictive model of computation. Again, the model of computation is not a particular circuit or implementation of any one algorithm. The model of computation is the arena in which cortical circuits and algorithms are implemented. The fundamental arena for cortical computations is thus a field of complex amplitudes normalized for probabilistic computation.
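The 2-norm normalization described above can be sketched numerically. This is an illustrative toy, not the authors' implementation: the amplitude values below are hypothetical, and the only claim demonstrated is that a complex vector normalized under the 2-norm yields squared magnitudes that sum to one, so they can be read as probabilities over orthogonal states.

```python
import numpy as np

def normalize(amplitudes: np.ndarray) -> np.ndarray:
    """Scale a complex amplitude vector to unit 2-norm."""
    # 2-norm: sqrt(sum of |a_i|^2) over the complex amplitudes
    return amplitudes / np.linalg.norm(amplitudes)

# Hypothetical vector of membrane-potential amplitudes (complex values).
state = normalize(np.array([1 + 1j, 0.5 - 0.5j, -1j]))

# Under the 2-norm, squared magnitudes sum to 1 and may be interpreted
# as a probability distribution over the orthogonal states represented.
probabilities = np.abs(state) ** 2
assert np.isclose(probabilities.sum(), 1.0)
```

This is the sense in which normalization turns a raw amplitude vector into a probabilistic representation: the same operation, whatever its biophysical substrate, is what licenses reading the vector's components as mutually exclusive alternatives.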
No, this does NOT require that your brain uses entangled atomic particles for computational primitives; No, this does NOT grant your brain super(computer) powers; No, this does NOT instantly give you a deep understanding of probability theory; No, this does NOT stretch your brain out across parallel multiverses; and, No, this does NOT allow your brain to (necessarily) factor large integers or solve any instance of an NP-hard problem in polynomial time. Rather, what this DOES do is give your brain a model of computation evolved for representing probabilistic states of the world, with information encoded in vectors of positive or negative complex numbers.
The cortex forms a model of computation over the complex field. Mutually inhibitory logical primitives normalize amplitudes under the 2-norm. A sufficient class of linear unitary operators exists to support a universal model of computation. Recent results show that the medial entorhinal cortex constructs a representation of spacetime from this underlying model. The lattice-like computational history is automatically generally covariant and background independent under transformations, permitting the derivation of an Einstein-Regge spacetime.
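The claim that linear unitary operators act on these normalized states can be illustrated with a minimal sketch. The operator below is a randomly generated unitary (via QR decomposition of a complex Gaussian matrix), chosen purely for illustration; what it demonstrates is the defining property the text relies on, namely that unitary evolution preserves the 2-norm and hence conserves total probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary 4x4 unitary: QR-decompose a complex Gaussian matrix.
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(m)

# A hypothetical normalized amplitude vector (unit 2-norm).
state = np.array([0.5, 0.5j, -0.5, 0.5j])
assert np.isclose(np.linalg.norm(state), 1.0)

# Unitarity: U†U = I, so applying U leaves the 2-norm unchanged.
assert np.allclose(U.conj().T @ U, np.eye(4), atol=1e-10)
evolved = U @ state
assert np.isclose(np.linalg.norm(evolved), 1.0)
```

Norm preservation is the point of contact between the operator algebra and the probabilistic reading: any circuit composed of such operators maps normalized states to normalized states.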