This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding) by extending the representation range to any real number through the ratio of two bipolar-encoded pulsed signals. Furthermore, the approach is almost completely noise-immune owing to its specific encoding. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the approach is demonstrated on a regression task and a pattern-recognition task. The low hardware cost of the methodology, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), makes it suitable for building highly reliable systems and for parallel computing.
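The ratio-of-two-bipolar-streams idea can be sketched in software. The following is a minimal illustration, not the paper's circuit: bipolar encoding maps x in [-1, 1] to a bit probability (x + 1)/2, and an arbitrary real value is carried as the ratio of two such streams. The `esl_encode` scaling rule (picking the denominator so the numerator stays in [-1, 1]) is a hypothetical choice made here for demonstration.

```python
import random

def bipolar_stream(x, n, rng):
    """Bipolar stochastic encoding: P(bit = 1) = (x + 1) / 2, x in [-1, 1]."""
    p = (x + 1.0) / 2.0
    return [1 if rng.random() < p else 0 for _ in range(n)]

def bipolar_decode(bits):
    """Recover the encoded value from the observed ones-frequency."""
    return 2.0 * sum(bits) / len(bits) - 1.0

def esl_encode(value, n, rng, denom=0.5):
    """Represent an arbitrary real `value` as the ratio p/q of two
    bipolar-encoded magnitudes. The scaling of q (hypothetical, for
    illustration) keeps p = value * q inside [-1, 1]."""
    q = min(denom, 1.0 / max(abs(value), 1.0))
    p = value * q
    return bipolar_stream(p, n, rng), bipolar_stream(q, n, rng)

def esl_decode(p_bits, q_bits):
    """Decode the ratio of the two streams; values outside [-1, 1] are
    recovered, which unipolar/bipolar coding alone cannot do."""
    return bipolar_decode(p_bits) / bipolar_decode(q_bits)
```

With long enough streams the decoded ratio converges to the target value, e.g. `esl_decode(*esl_encode(3.0, 100000, rng))` is close to 3.0 even though each individual stream only encodes a value in [-1, 1].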
The brain performs many diverse processing tasks, ranging from elaborate processes such as pattern recognition, memory, or decision making to simpler functions such as linear filtering in image processing. Understanding the mechanisms by which the brain produces such a wide range of cortical operations remains a fundamental problem in neuroscience. Here we study which processes are related to chaotic and synchronized states, based on in-silico implementations of Stochastic Spiking Neural Networks (SSNNs). The measurements reveal that chaotic neural ensembles are excellent transmission and convolution systems, since the mutual information between signals is minimized. At the same time, synchronized cells (which can be understood as ordered states of the brain) can be associated with more complex nonlinear computations. In this sense, we show experimentally that complex and fast pattern-recognition processes arise when synchronized and chaotic states are mixed. These measurements are in accordance with in vivo observations on the role of neural synchrony in pattern recognition and on the speed of the real biological process. We also suggest that the high-level adaptive mechanisms of the brain, namely the Hebbian and non-Hebbian learning rules, can be understood as processes devoted to generating the appropriate clustering of synchronized and chaotic ensembles. The measurements obtained from the hardware implementation of different types of neural systems suggest that brain processing can be governed by the superposition of these two states with complementary functionalities: nonlinear processing for synchronized states, and information convolution and parallelization for chaotic ones.
Hardware implementation of artificial neural networks (ANNs) allows the inherent parallelism of these systems to be exploited. Nevertheless, such implementations require large amounts of area and power. Recently, Reservoir Computing (RC) has emerged as a strategic technique for designing recurrent neural networks (RNNs) with simple learning capabilities. In this work, we present a new approach to implementing RC systems with digital gates. The proposed method uses probabilistic computing concepts to reduce the hardware required for the different arithmetic operations, resulting in a highly functional system with low hardware resources. The methodology is applied to chaotic time-series forecasting.
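One standard way probabilistic computing shrinks arithmetic hardware, which a reservoir's many weighted sums can exploit, is scaled addition with a single multiplexer: selecting between two unipolar bitstreams with a 50% select stream yields a stream encoding (a + b)/2. This is a generic stochastic-computing sketch, not necessarily the exact block used in the paper.

```python
import random

def unipolar_stream(x, n, rng):
    """Unipolar encoding: P(bit = 1) = x, for x in [0, 1]."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def mux_scaled_add(a_bits, b_bits, sel_bits):
    """A 2-to-1 multiplexer driven by a 50% select stream encodes
    (a + b) / 2 -- a single small block instead of a binary adder."""
    return [a if s else b for a, b, s in zip(a_bits, b_bits, sel_bits)]

def unipolar_decode(bits):
    """Recover the encoded value as the ones-frequency of the stream."""
    return sum(bits) / len(bits)
```

For example, feeding streams encoding 0.8 and 0.2 through the multiplexer produces a stream whose ones-frequency is close to 0.5; the 1/2 scaling keeps the result inside the representable range.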
In this work we review the basic principles of stochastic logic and its application to the hardware implementation of neural networks. We present the mathematical basis of stochastic-based neurons, along with the specific circuits needed to implement the processing of each neuron, and we propose a new methodology to reproduce the nonlinear activation function. The proposed methodology can be used to implement any kind of neural network.
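The core arithmetic of stochastic logic can be illustrated with its two classic single-gate multipliers: an AND gate multiplies two independent unipolar streams, and an XNOR gate multiplies two independent bipolar streams. The sketch below is a software model of these standard identities, not the paper's specific neuron circuit.

```python
import random

def unipolar_stream(x, n, rng):
    """Unipolar encoding: P(bit = 1) = x, for x in [0, 1]."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def bipolar_stream(x, n, rng):
    """Bipolar encoding: P(bit = 1) = (x + 1) / 2, for x in [-1, 1]."""
    return [1 if rng.random() < (x + 1) / 2 else 0 for _ in range(n)]

def and_multiply(a_bits, b_bits):
    """A single AND gate multiplies two independent unipolar streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def xnor_multiply(a_bits, b_bits):
    """A single XNOR gate multiplies two independent bipolar streams."""
    return [1 - (a ^ b) for a, b in zip(a_bits, b_bits)]

def unipolar_decode(bits):
    return sum(bits) / len(bits)

def bipolar_decode(bits):
    return 2 * sum(bits) / len(bits) - 1
```

The bipolar identity follows from P(XNOR = 1) = p_a p_b + (1 - p_a)(1 - p_b): substituting p = (x + 1)/2 and decoding gives exactly the product x·y, which is why a synaptic multiplication costs one gate in this arithmetic.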
Spiking Neural Networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes are an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that accounts for this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and can therefore be massively deployed on Field-Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks, which are able to implement high-speed signal filtering and to solve complex systems of linear equations.
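A probabilistic spiking neuron can be modeled minimally in software as a unit that fires on each cycle with a probability set by its weighted input. The logistic firing rule used here is an assumption for illustration (a common choice for stochastic neuron models), not the specific digital circuit proposed in the paper.

```python
import math
import random

def stochastic_neuron(weights, inputs, rng, beta=1.0):
    """Emit a spike (1) with probability sigma(beta * w.x): the neuron's
    randomness carries the analogue value, as in stochastic spiking models."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    p_fire = 1.0 / (1.0 + math.exp(-beta * drive))
    return 1 if rng.random() < p_fire else 0

def firing_rate(weights, inputs, rng, trials=20000):
    """Average many cycles: the spike rate estimates the firing probability."""
    spikes = sum(stochastic_neuron(weights, inputs, rng) for _ in range(trials))
    return spikes / trials
```

Averaged over many cycles, the observed spike rate approaches the sigmoidal activation of the weighted input, so downstream logic can process the analogue quantity using only single-bit spike events.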