Advances in computer science have led to the flourishing of artificial intelligence (AI), and neural networks are at the core of AI research. Although mainstream neural networks perform well in image processing and speech recognition, they fall short in models aimed at understanding contextual information. In our opinion, the reason is that building a neural network through parameter training essentially fits the data to a statistical law. Since a network built this way possesses no memory ability, it cannot reflect causal relationships between data. Biological memory is fundamentally different from mainstream digital memory in its storage method: information stored in digital memory is converted to binary code and written to separate storage units, and this physical isolation destroys the correlations within the information. As a result, digital memory lacks the recall and association functions of biological memory, which can represent causality. In this paper, we present the results of a preliminary effort at constructing an associative memory system based on a spiking neural network. We break the network-building process into two phases: a Structure Formation Phase and a Parameter Training Phase. The Structure Formation Phase applies a learning method based on Hebb's rule to induce neurons in the memory layer to grow new synapses to neighboring neurons in response to specific input spiking sequences fed to the network; the aim of this phase is to train the network to memorize those sequences. During the Parameter Training Phase, STDP and reinforcement learning are employed to optimize synaptic weights so that the network can recall the memorized input spiking sequences. The results show that our memory neural network can memorize different targets and recall the images it has memorized.
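To make the two-phase idea concrete, below is a minimal NumPy sketch of how Hebbian co-activity could trigger synapse growth (Structure Formation) and how a pair-based STDP rule could then adjust the grown weights (Parameter Training). All constants and thresholds, and the simplified depression term, are illustrative assumptions rather than the authors' actual implementation, and the reinforcement-learning component is omitted for brevity.

```python
# Minimal NumPy sketch of the two-phase idea. Names, thresholds, and
# constants are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, T = 20, 200

# Toy spike raster standing in for the "specific input spiking sequences".
spikes = rng.random((T, n_neurons)) < 0.05

# --- Phase 1: Structure Formation (Hebb-like synapse growth) ---
# Count how often neuron i fires one step before neuron j; grow a synapse
# i -> j when that co-activity crosses an assumed threshold.
coactivity = np.zeros((n_neurons, n_neurons))
for t in range(1, T):
    coactivity += np.outer(spikes[t - 1], spikes[t])

connected = coactivity >= 2                  # assumed growth threshold
weights = np.where(connected, 0.5, 0.0)      # new synapses start mid-range

# --- Phase 2: Parameter Training (pair-based STDP, RL omitted) ---
tau, a_plus, a_minus = 20.0, 0.01, 0.012
trace = np.zeros(n_neurons)                  # presynaptic spike trace
for t in range(T):
    trace = trace * np.exp(-1.0 / tau) + spikes[t]
    post = spikes[t].astype(float)
    # Potentiate synapses whose presynaptic trace is high at a postsynaptic
    # spike; mildly depress synapses whose presynapse fired without one.
    weights += connected * (a_plus * np.outer(trace, post)
                            - a_minus * np.outer(post, 1.0 - post))
    np.clip(weights, 0.0, 1.0, out=weights)

print("synapses grown:", int(connected.sum()),
      "| mean weight:", round(float(weights[connected].mean()), 3))
```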
Small-sample learning is one of the most significant capabilities of the human brain, yet its mechanism has not been fully unveiled. In recent years, brain-inspired artificial intelligence has become a very active research domain, with researchers exploring brain-inspired technologies and architectures to construct neural networks that achieve human-like intelligence. In this work, we present our effort to evaluate the effect of dynamic-behavior and topology co-learning of neurons and synapses on the small-sample learning ability of a spiking neural network. Results show that this co-learning mechanism can significantly reduce the number of required training samples while maintaining reasonable performance on the MNIST dataset, yielding a very lightweight network structure.
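As an illustration of what "dynamic-behavior and topology co-learning" might look like in code, the sketch below co-adapts neuron thresholds (dynamic behavior) and a synapse mask (topology) on toy spike inputs. The homeostatic rule, the pruning schedule, and every constant are assumptions made for illustration; the paper's actual mechanism may differ.

```python
# Illustrative sketch of co-learning: thresholds adapt (dynamic behavior)
# while weak synapses are pruned (topology). All rules and constants are
# assumptions for illustration, not the paper's mechanism.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, T = 64, 10, 300

w = rng.random((n_in, n_out)) * 0.2          # initial weights
alive = np.ones_like(w, dtype=bool)          # current topology mask
theta = np.full(n_out, 1.0)                  # adaptive firing thresholds

for t in range(T):
    x = (rng.random(n_in) < 0.1).astype(float)   # toy input spikes
    fired = (x @ (w * alive)) >= theta

    # Dynamic behavior: homeostatic threshold drift toward a 5% target rate.
    theta += 0.05 * (fired.astype(float) - 0.05)

    # Hebbian potentiation on surviving synapses, with slow global decay.
    w += 0.01 * np.outer(x, fired) * alive
    w *= 0.995

    # Topology learning: periodically prune synapses that decayed away.
    if t % 50 == 49:
        alive &= w > 0.01

print("synapses remaining:", int(alive.sum()), "of", alive.size)
```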
In neuroscience, the Default Mode Network (DMN), also known as the default network or the default-state network, is a large-scale brain network whose highly correlated activity is distinct from that of other brain networks. Many studies have shown that the DMN can influence other cognitive functions to some extent. Motivated by this idea, this paper experimentally explores how a DMN could help Spiking Neural Networks (SNNs) on image classification problems. The approach emphasizes biological plausibility in model selection and parameter settings: we select Leaky Integrate-and-Fire (LIF) as the neuron model, Additive White Gaussian Noise (AWGN) as the input DMN, and design the learning algorithm based on Spike-Timing-Dependent Plasticity (STDP). We then experiment on a two-layer SNN to evaluate the influence of the DMN on classification accuracy, and on a three-layer SNN to examine its influence on structure evolution; both sets of results are positive. Finally, we discuss possible directions for future work.
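The modeling choices named above are concrete enough to sketch: the snippet below simulates a single LIF neuron driven by a toy input spike train plus an AWGN current standing in for the DMN (the STDP learning rule is omitted). Parameters such as tau_m, the noise amplitude sigma_dmn, and the firing threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of the stated modeling choices: one LIF neuron driven by a toy
# input spike train plus AWGN standing in for the DMN (STDP omitted).
# tau_m, sigma_dmn, and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1.0, 500                              # time step (ms), steps
tau_m, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0
w_in, sigma_dmn = 0.6, 0.15                   # input weight, DMN noise level

input_spikes = rng.random(T) < 0.05           # toy feedforward spike train
v, spike_times = v_rest, []

for t in range(T):
    i_syn = w_in * input_spikes[t]            # synaptic kick on input spike
    i_dmn = sigma_dmn * rng.standard_normal() # AWGN "default mode" drive
    v += (dt / tau_m) * (v_rest - v) + i_syn + i_dmn   # leaky integration
    if v >= v_thresh:                         # fire and reset
        spike_times.append(t)
        v = v_reset

print(len(spike_times), "spikes; first few at steps", spike_times[:5])
```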
In recent years, there has been ever-increasing academic and industrial interest and investment in Artificial Intelligence (AI). As one of the hotspots within AI, Artificial Neural Networks (ANNs) have already been applied across many different applications. However, traditional ANNs have disadvantages, such as fixed and redundant structures, which lead to large requirements for training data and training time. Biological research has shown that biological neural networks behave more flexibly, with synapses forming or withering as needed. In this paper, we present a Correlation Analysis Based Neural Network Self-Organizing Genetic Evolutionary Algorithm. Based on correlation analysis of the training process, self-organization combined with a genetic evolutionary algorithm is applied to improve both the performance efficiency and the structural efficiency of the resulting network. Results show that our algorithm can generate neural networks with more compact structures and reasonable classification accuracy.
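Below is a minimal sketch of the correlation-analysis idea, under the assumption that "redundant" means highly correlated hidden-neuron activations: one neuron of each strongly correlated pair is pruned, and a toy mutation step then re-adds a few neurons, standing in for one generation of the evolutionary loop. The correlation threshold and mutation rate are hypothetical, not the paper's algorithm.

```python
# Sketch of the correlation-analysis idea, assuming "redundant" means
# highly correlated hidden activations: prune one neuron of each strongly
# correlated pair, then mutate the structure as a toy evolutionary step.
# Thresholds and the mutation rate are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_hidden = 200, 32

# Toy hidden-layer activations recorded during training; in a real run
# these would come from the network being evolved.
acts = rng.standard_normal((n_samples, n_hidden))
acts[:, 1] = acts[:, 0] + 0.01 * rng.standard_normal(n_samples)  # redundancy

corr = np.corrcoef(acts, rowvar=False)
keep = np.ones(n_hidden, dtype=bool)
for i in range(n_hidden):
    for j in range(i + 1, n_hidden):
        if keep[i] and keep[j] and abs(corr[i, j]) > 0.95:
            keep[j] = False                   # self-organizing pruning

# One toy generation of the evolutionary step: randomly re-add neurons;
# a full algorithm would re-score fitness and select survivors.
keep |= rng.random(n_hidden) < 0.05

print("hidden neurons kept:", int(keep.sum()), "of", n_hidden)
```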