We present a model for spike-driven dynamics of a plastic synapse, suited for aVLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution of a classic problem in queuing theory (the Takács process). The synapse model is implemented in aVLSI with only 18 transistors and is also simulated directly. The simulations indicate that the LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters over a wide range of rates. The analytical solutions for these probabilities are in very good agreement with both the simulations and the measurements. Moreover, the probabilities are readily manipulable by varying the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3-4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained across this entire range, even though the intrinsic time constants of the device are short (approximately 100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. The stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that this binary constraint does not affect the collective behavior of the network, provided the transition probabilities are low and LTP is balanced against LTD.
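To make the stochastic transition mechanism concrete, the following is a minimal Monte Carlo sketch in Python of a spike-driven bistable synapse: an internal variable X jumps up or down on presynaptic Poisson spikes, gated by a crude proxy for the postsynaptic depolarization, and drifts toward one of two stable values between spikes. All names and parameter values (A_UP, A_DOWN, THETA_X, the depolarization proxy, the 0.5 s stimulation window) are illustrative assumptions for this sketch, not quantities of the circuit or the analytical model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters -- placeholders, not the paper's circuit values
A_UP, A_DOWN = 0.12, 0.10   # up/down jumps of the internal variable on a presynaptic spike
ALPHA, BETA = 0.9, 0.9      # refresh drift rates toward the two stable states (1/s)
THETA_X = 0.5               # bistability threshold on the internal variable X
T_STIM = 0.5                # stimulation interval (s)

def run_trial(nu_pre, nu_post, x0=0.0, dt=1e-3):
    """Simulate one stimulation interval; return 1 if the synapse ends potentiated."""
    x = x0
    for _ in range(int(T_STIM / dt)):
        # Poisson presynaptic spike in this time step
        if rng.random() < nu_pre * dt:
            # crude proxy for "postsynaptic depolarization above threshold":
            # more likely when the postsynaptic rate is high
            depolarized = rng.random() < nu_post / (nu_post + 20.0)
            x += A_UP if depolarized else -A_DOWN
        # refresh term: drift toward the nearest stable state (0 or 1)
        x += (ALPHA if x > THETA_X else -BETA) * dt
        x = min(max(x, 0.0), 1.0)
    return int(x > THETA_X)

def ltp_probability(nu_pre, nu_post, trials=1000):
    """Monte Carlo estimate of the LTP probability, starting from the depressed state."""
    return float(np.mean([run_trial(nu_pre, nu_post) for _ in range(trials)]))

print(ltp_probability(50.0, 50.0))   # elevated pre/post rates: nonzero LTP probability
print(ltp_probability(4.0, 4.0))     # spontaneous rates: essentially no transitions
```

Sweeping nu_pre and nu_post in ltp_probability traces out transition-probability curves qualitatively analogous to the LTP/LTD probabilities versus rates discussed above.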
An efficient framework for the optimal control of the probability density functions (PDFs) of multidimensional stochastic processes is presented. The framework is based on the Fokker–Planck equation, which governs the time evolution of the PDF of a stochastic process, and on objectives that track a desired terminal configuration of the PDF. The corresponding optimization problems are formulated as a sequence of open-loop optimality systems within a receding-horizon control strategy. Several theoretical results concerning the forward problem and the optimal control problem are provided. In particular, it is shown that, under appropriate assumptions, the open-loop bilinear control function is unique. The resulting optimality system is discretized by the Chang–Cooper scheme, which guarantees positivity of the forward solution. The effectiveness of the proposed computational framework is validated on a stochastic Lotka–Volterra model and a noisy limit-cycle model.
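To illustrate the discretization step, here is a small Python sketch of a Chang–Cooper finite-volume scheme for a one-dimensional Fokker–Planck equation with implicit Euler time stepping. The drift b(x) = -x, the constant diffusion coefficient D, the grid, and all function names are assumptions made for this sketch, not the discretization code or the models used in the paper.

```python
import numpy as np

# Chang-Cooper discretization of a 1D Fokker-Planck equation in flux form,
#   df/dt + d/dx [ b(x) f - D df/dx ] = 0,
# on [xa, xb] with zero-flux boundaries; implicit Euler keeps the solution nonnegative.

def chang_cooper_step(f, b_face, D, h, dt):
    """One implicit Euler step with Chang-Cooper flux weights at the cell faces."""
    n = f.size
    w = h * b_face / D                               # drift/diffusion ratio at each interior face
    w_safe = np.where(np.abs(w) > 1e-12, w, 1.0)
    # delta in (0,1): centered differences for w -> 0, upwinding for |w| large
    delta = np.where(np.abs(w) > 1e-12, 1.0 / w_safe - 1.0 / np.expm1(w_safe), 0.5)

    # flux J_{i+1/2} = b [ (1-delta) f_i + delta f_{i+1} ] - D (f_{i+1} - f_i) / h;
    # assemble (I + dt/h * dJ/df) f^{n+1} = f^n as a (dense, for brevity) tridiagonal system
    A = np.zeros((n, n))
    for i in range(n):
        if i < n - 1:   # contribution of +J_{i+1/2}
            A[i, i]     += dt / h * (b_face[i] * (1.0 - delta[i]) + D / h)
            A[i, i + 1] += dt / h * (b_face[i] * delta[i] - D / h)
        if i > 0:       # contribution of -J_{i-1/2}
            A[i, i - 1] -= dt / h * (b_face[i - 1] * (1.0 - delta[i - 1]) + D / h)
            A[i, i]     -= dt / h * (b_face[i - 1] * delta[i - 1] - D / h)
    A += np.eye(n)
    return np.linalg.solve(A, f)

# illustrative setup: Ornstein-Uhlenbeck-like drift b(x) = -x, constant diffusion
xa, xb, n = -5.0, 5.0, 200
h = (xb - xa) / n
x = xa + (np.arange(n) + 0.5) * h        # cell centers
x_face = xa + np.arange(1, n) * h        # interior cell faces
b_face = -x_face
D, dt = 0.5, 1e-2

f = np.exp(-((x - 2.0) ** 2))            # initial PDF, normalized below
f /= f.sum() * h
for _ in range(300):
    f = chang_cooper_step(f, b_face, D, h, dt)
print("mass:", f.sum() * h, "min:", f.min())   # mass is conserved and f stays nonnegative
```

The exponentially weighted interface coefficient delta is what lets the scheme behave like centered differences where diffusion dominates and like upwinding where drift dominates, which is how positivity of the discrete forward solution is preserved.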
A Fokker–Planck framework for the formulation of an optimal control strategy for stochastic processes is presented. Within this strategy, the control objectives are defined in terms of the probability density functions of the stochastic processes. The optimal control is obtained as the minimizer of the objective under the constraint given by the Fokker–Planck model. Representative stochastic processes are considered with different control laws, with the purpose of attaining a final target configuration or of tracking a desired trajectory. In the latter case, a receding-horizon algorithm over a sequence of time windows is implemented.
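In symbols, a representative form of this constrained minimization, assuming a quadratic terminal tracking term and a control penalty (the specific functional, weights, and constant diffusion below are illustrative, not necessarily the exact choices made here), is

\[
\min_{u}\; J(f,u) \;=\; \frac{1}{2}\int_{\Omega}\bigl(f(x,T)-f_d(x)\bigr)^{2}\,dx \;+\; \frac{\nu}{2}\,\lVert u\rVert^{2}
\quad\text{subject to}\quad
\partial_t f \;-\; D\,\Delta f \;+\; \nabla\!\cdot\!\bigl(b(x;u)\,f\bigr) \;=\; 0,\qquad f(\cdot,0)=f_0,
\]

where f is the PDF evolved by the Fokker–Planck equation, f_d is the desired target density, b(x;u) is the controlled drift, and ν > 0 weights the control cost. In the receding-horizon variant, this problem is solved on a sequence of time windows (t_k, t_{k+1}), and the terminal PDF of each window serves as the initial condition for the next.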