Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations—e.g., random noise—cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being “suboptimal”. Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Because it is a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the “neural code”. Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise—via stochastic resonance or otherwise—than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.
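To make the noise-benefit idea above concrete, the following is a minimal simulation sketch, not taken from the article: a subthreshold sine wave is passed through a simple threshold detector, and a crude output-quality metric peaks at an intermediate, nonzero noise level. The signal amplitude, threshold, and noise levels are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the article's model): a weak, subthreshold sine wave is
# passed through a hard threshold. With no noise the output never crosses the
# threshold; moderate noise lets threshold crossings track the signal, so the
# output/signal correlation rises before falling again at high noise levels.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 10_000)
signal = 0.3 * np.sin(2.0 * np.pi * 1.0 * t)   # subthreshold amplitude (assumed)
threshold = 0.5                                 # detector threshold (assumed)

for noise_std in (0.0, 0.1, 0.3, 0.6, 1.2, 2.5):
    noise = rng.normal(0.0, noise_std, size=t.shape)
    output = (signal + noise > threshold).astype(float)  # 1 = threshold crossed
    # Correlation between the binary output and the hidden signal, used here as a
    # simple quality metric; it is maximal at an intermediate, nonzero noise level.
    corr = 0.0 if output.std() == 0 else np.corrcoef(output, signal)[0, 1]
    print(f"noise_std={noise_std:4.2f}  output/signal correlation={corr:5.3f}")
```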
Although typically assumed to degrade performance, random fluctuations, or noise, can sometimes improve information processing in nonlinear systems. One such form of “stochastic facilitation”, stochastic resonance, has been observed to enhance processing both in theoretical models of neural systems and in experimental neuroscience. However, the two approaches have yet to be fully reconciled. Understanding the diverse roles of noise in neural computation will require the design of experiments based on new theory and models, into which biologically appropriate experimental detail feeds back at various levels of abstraction.
In this paper, we investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier. Two approaches for creating additional training samples are data warping, which generates additional samples through transformations applied in the data-space, and synthetic over-sampling, which creates additional samples in feature-space. We experimentally evaluate the benefits of data augmentation for a convolutional backpropagation-trained neural network, a convolutional support vector machine, and a convolutional extreme learning machine classifier, using the standard MNIST handwritten digit dataset. We found that while it is possible to perform generic augmentation in feature-space, if plausible transforms for the data are known, then augmentation in data-space provides a greater benefit for improving performance and reducing overfitting.
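As a rough illustration of the two augmentation routes contrasted above, here is a minimal sketch (illustrative only, not the authors' code): data-space warping via small random pixel shifts of an image, and feature-space over-sampling by interpolating between same-class feature vectors in a SMOTE-like fashion. The array shapes, helper names, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def warp_data_space(image: np.ndarray, max_shift: int = 2) -> np.ndarray:
    """Data-space augmentation: apply a small random pixel shift to a 28x28 image.
    Any label-preserving transform (rotation, elastic distortion, ...) fits here."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def oversample_feature_space(features: np.ndarray) -> np.ndarray:
    """Feature-space augmentation (SMOTE-style): interpolate between two feature
    vectors of the same class to synthesize a new sample."""
    i, j = rng.choice(len(features), size=2, replace=False)
    lam = rng.uniform()
    return features[i] + lam * (features[j] - features[i])

# Toy usage with fake data standing in for MNIST images / learned features.
images = rng.random((10, 28, 28))        # pretend: digit images of a single class
features = rng.random((10, 64))          # pretend: their learned feature vectors
augmented_image = warp_data_space(images[0])
synthetic_feature = oversample_feature_space(features)
print(augmented_image.shape, synthetic_feature.shape)
```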
The sigmoidal tuning curve that maximizes the mutual information for a Poisson neuron, or population of Poisson neurons, is obtained. The optimal tuning curve is found to have a discrete structure that results in a quantization of the input signal. The number of quantization levels undergoes a hierarchy of phase transitions as the length of the coding window is varied. We postulate, using the mammalian auditory system as an example, that the presence of a subpopulation structure within a neural population is consistent with an optimal neural code.
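For concreteness, the following is a minimal statement of the optimization implied by the abstract, written under standard Poisson-coding assumptions rather than in the paper's own notation: the spike count in a coding window of length T given stimulus x is Poisson with mean T f(x), and the tuning curve f is chosen to maximize the mutual information between stimulus and count.

```latex
% Sketch of the setup (standard assumptions, not the paper's notation).
% Spike count N given stimulus X = x, with sigmoidal tuning curve f and
% coding-window length T:
\[
  P(N = n \mid X = x) \;=\; \frac{\bigl(T f(x)\bigr)^{n}}{n!}\, e^{-T f(x)} .
\]
% The optimal tuning curve maximizes the mutual information between X and N:
\[
  f^{\ast} \;=\; \arg\max_{f}\; I(X;N)
  \;=\; \arg\max_{f}\; \sum_{n=0}^{\infty} \int p(x)\, P(n \mid x)\,
        \log \frac{P(n \mid x)}{P(n)} \, dx ,
  \qquad P(n) \;=\; \int p(x)\, P(n \mid x)\, dx .
\]
% The abstract's claim is that, for fixed T, the maximizing f takes only a
% discrete set of values (it quantizes x), and the number of levels changes
% with T through a hierarchy of phase transitions.
```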