The characterization of gray matter morphology of individual brains is an important issue in neuroscience. Graph theory has been used to describe cortical morphology, with networks based on covariation of gray matter volume or thickness between cortical areas across people. Here, we extend this research by proposing a new method that describes the gray matter morphology of an individual cortex as a network. In these large-scale morphological networks, nodes represent small cortical regions, and edges connect regions that have a statistically similar structure. The method was applied to a healthy sample (n = 14, scanned at 2 different time points). For all networks, we described the spatial degree distribution, average minimum path length, average clustering coefficient, small world property, and betweenness centrality (BC). Finally, we studied the reproducibility of all these properties. The networks showed more clustering than random networks and a similar minimum path length, indicating that they were "small world." The spatial degree and BC distributions corresponded closely to those from group-derived networks. All network property values were reproducible over the 2 time points examined. Our results demonstrate that intracortical similarities can be used to provide a robust statistical description of individual gray matter morphology.
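The small-world comparison above rests on two graph metrics: the average clustering coefficient and the average minimum path length. As a minimal illustrative sketch (not the paper's implementation), the following computes both for a toy undirected graph given as a binary adjacency matrix; the graph and all values are hypothetical:

```python
import numpy as np

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph."""
    n = adj.shape[0]
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Edges among the neighbours; the symmetric submatrix counts each twice
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def average_path_length(adj):
    """Mean shortest-path length via BFS (assumes a connected graph)."""
    n = adj.shape[0]
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(d for node, d in dist.items() if node != src)
        pairs += n - 1
    return total / pairs

# Toy example: a 5-node ring plus one chord (purely illustrative)
adj = np.zeros((5, 5), dtype=int)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    adj[a, b] = adj[b, a] = 1

C = clustering_coefficient(adj)
L = average_path_length(adj)
```

A network is conventionally called "small world" when C substantially exceeds, and L roughly matches, the same quantities computed on degree-matched random graphs.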
Several studies have shown that the information conveyed by bell-shaped tuning curves increases as their width decreases, leading to the notion that sharpening of tuning curves improves population codes. This notion, however, is based on assumptions that the noise distribution is independent among neurons and independent of the tuning curve width. Here we reexamine these assumptions in networks of spiking neurons by using orientation selectivity as an example. We compare two principal classes of model: one in which the tuning curves are sharpened through cortical lateral interactions, and one in which they are not. We report that sharpening through lateral interactions does not improve population codes but, on the contrary, leads to a severe loss of information. In addition, the sharpening models generate complicated codes that rely extensively on pairwise correlations. Our study generates several experimental predictions that can be used to distinguish between these two classes of model.
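The independence assumptions under discussion can be made concrete. A minimal sketch, assuming independent Poisson noise and a dense homogeneous population of circular-Gaussian tuning curves (all parameter values are illustrative, not from the paper): population Fisher information, I(theta) = sum_i f_i'(theta)^2 / f_i(theta), grows as the tuning width shrinks, which is exactly the premise that breaks down once the correlated noise induced by lateral interactions is taken into account.

```python
import numpy as np

def fisher_info_poisson(theta, prefs, width, gain=20.0, base=1.0):
    """Population Fisher information at stimulus theta for independent
    Poisson neurons with circular Gaussian tuning curves."""
    # f_i(theta) = base + gain * exp((cos(theta - pref_i) - 1) / width**2)
    d = theta - prefs
    bump = np.exp((np.cos(d) - 1.0) / width**2)
    f = base + gain * bump
    df = gain * bump * (-np.sin(d)) / width**2   # df_i/dtheta
    return float(np.sum(df**2 / f))

# 64 neurons with preferred orientations tiling the circle (illustrative)
prefs = np.linspace(-np.pi, np.pi, 64, endpoint=False)
I_narrow = fisher_info_poisson(0.0, prefs, width=0.3)
I_broad = fisher_info_poisson(0.0, prefs, width=0.8)
# Under independent Poisson noise, narrower tuning yields more information
```

The point of the paper is that this conclusion is an artifact of the independence assumption: when sharpening is produced by lateral interactions, the accompanying pairwise correlations can reverse it.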
Neural activity and perception are both affected by sensory history. The work presented here explores the relationship between the physiological effects of adaptation and their perceptual consequences. Perception is modeled as arising from an encoder-decoder cascade, in which the encoder is defined by the probabilistic response of a population of neurons, and the decoder transforms this population activity into a perceptual estimate. Adaptation is assumed to produce changes in the encoder, and we examine the conditions under which the decoder behavior is consistent with observed perceptual effects in terms of both bias and discriminability. We show that for all decoders, discriminability is bounded from below by the inverse Fisher information. Estimation bias, on the other hand, can arise for a variety of different reasons and can range from zero to substantial. We specifically examine biases that arise when the decoder is fixed, "unaware" of the changes in the encoding population (as opposed to "aware" of the adaptation and changing accordingly). We simulate the effects of adaptation on two well-studied sensory attributes, motion direction and contrast, assuming a gain change description of encoder adaptation. Although we cannot uniquely constrain the source of decoder bias, we find for both motion and contrast that an "unaware" decoder that maximizes the likelihood of the percept given by the preadaptation encoder leads to predictions that are consistent with behavioral data. This model implies that adaptation-induced biases arise as a result of temporary suboptimality of the decoder.
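The "unaware" decoder can be illustrated with a small simulation. All modelling choices below are hypothetical stand-ins rather than the paper's exact model: Gaussian tuning curves, adaptation as a local gain reduction near the adaptor, and maximum-likelihood decoding of the adapted population response under the pre-adaptation encoder with independent Poisson noise.

```python
import numpy as np

def tuning(theta, prefs, width=0.5, gain=10.0, base=0.5):
    """Pre-adaptation tuning curves: Gaussian bumps over preferred value."""
    return base + gain * np.exp(-(theta - prefs) ** 2 / (2 * width ** 2))

prefs = np.linspace(-np.pi, np.pi, 101)  # preferred directions (illustrative)
theta_true = 0.5                          # test stimulus
adaptor = 0.0                             # adapted direction

# Adaptation modelled as a gain reduction for neurons tuned near the adaptor
adapt_gain = 1.0 - 0.3 * np.exp(-(prefs - adaptor) ** 2 / (2 * 0.5 ** 2))
r = adapt_gain * tuning(theta_true, prefs)   # expected adapted responses

# "Unaware" decoder: maximum likelihood under the PRE-adaptation encoder
# (independent Poisson noise assumed), evaluated on a dense grid
grid = np.linspace(-np.pi, np.pi, 4001)
loglik = [np.sum(r * np.log(tuning(t, prefs)) - tuning(t, prefs)) for t in grid]
theta_hat = grid[int(np.argmax(loglik))]
bias = theta_hat - theta_true   # positive: estimate repelled from the adaptor
```

Because the adaptor-side flank of the population response is suppressed while the decoder still expects the unadapted profile, the estimate is pushed away from the adapted value, the classic repulsive aftereffect.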
Expectations broadly influence our experience of the world. However, the process by which they are acquired and then shape our sensory experiences is not well understood. Here, we examined whether expectations of simple stimulus features can be developed implicitly through a fast statistical learning procedure. We found that participants quickly and automatically developed expectations for the most frequently presented directions of motion and that this altered their perception of new motion directions, inducing attractive biases in the perceived direction as well as visual hallucinations in the absence of a stimulus. Further, the biases in motion direction estimation that we observed were well explained by a model that accounted for participants' behavior using a Bayesian strategy, combining a learned prior of the stimulus statistics (the expectation) with their sensory evidence (the actual stimulus) in a probabilistically optimal manner. Our results demonstrate that stimulus expectations are rapidly learned and can powerfully influence perception of simple visual features.
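For a Gaussian prior and a Gaussian likelihood, the probabilistically optimal combination is a precision-weighted average, which produces exactly the attractive bias described above. A minimal sketch with illustrative numbers (not the study's fitted parameters):

```python
import numpy as np

# Bayesian estimate of motion direction: a learned Gaussian prior centred on
# the most frequent direction, combined with a Gaussian sensory likelihood.
# All values and widths below are illustrative.
mu_prior, sigma_prior = 0.0, 10.0     # degrees: expected direction, prior width
theta_stim, sigma_like = 20.0, 8.0    # presented direction, sensory noise

# Posterior mean of a Gaussian-Gaussian model: precision-weighted average
w = sigma_prior ** -2 / (sigma_prior ** -2 + sigma_like ** -2)
theta_hat = w * mu_prior + (1 - w) * theta_stim

bias = theta_hat - theta_stim   # negative: percept attracted toward the prior
```

The weight w grows as the sensory evidence becomes noisier relative to the prior, so attraction toward the expected direction is strongest for weak or ambiguous stimuli, consistent with hallucinated motion when no stimulus is present.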
The spiking response of a primary visual cortical cell to a stimulus placed within its receptive field can be up- and down-regulated by the simultaneous presentation of objects or scenes placed in the "silent" regions that surround the receptive field. Here we review recent experimental and theoretical progress in describing these so-called "Center/Surround" modulations and in understanding their neural basis. Without denying the role of modulatory feedback from higher cortical areas, recent results support the view that some of these phenomena result from the dynamic interplay between feedforward projections and horizontal intracortical connectivity in V1. Uncovering the functional role of the contextual periphery of cortical receptive fields has become an area of active investigation. Detailed comparison of electrophysiological and psychophysical data reveals strong correlations between the integrative behavior of V1 cells and some aspects of "low-level" and "mid-level" conscious perception. These correlations suggest that, as early as the V1 stage, the visual system is able to make use of contextual cues to recover local visual scene properties or to correct their interpretation. Promising ideas have emerged on the importance of such a strategy for the coding of visual scenes and the processing of static and moving objects.