Small-sample learning is one of the most significant capabilities of the human brain, yet its mechanism has not been fully unveiled. In recent years, brain-inspired artificial intelligence has become a very active research domain, with researchers exploring brain-inspired technologies and architectures to construct neural networks that achieve human-like intelligence. In this work, we evaluate the effect of dynamic-behavior and topology co-learning of neurons and synapses on the small-sample learning ability of spiking neural networks. Results show that the co-learning mechanism presented in our work significantly reduces the number of required training samples while maintaining reasonable performance on the MNIST dataset, yielding a very lightweight network structure.
With the exponential growth of vector data, solving classification and regression tasks by mining time series has become a research hotspot. Commonly used methods include machine learning and artificial neural networks, but these methods extract only the continuous or discrete features of sequences and suffer from low information utilization, poor robustness, and high computational complexity. To address these problems, this paper replaces the Kullback–Leibler divergence with the Wasserstein distance and uses it to construct an autoencoder that learns the discrete features of time series. A hidden Markov model is then used to learn the continuous features of the sequence. Finally, stacking is used to ensemble the two models into the final model. Experiments verify that the ensemble model has lower computational complexity while approaching state-of-the-art classification accuracy.
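As a rough illustration of the autoencoder component, the sketch below (assuming PyTorch) pulls the latent code toward a standard-normal prior with a Wasserstein penalty instead of a KL term. The window length, layer sizes, per-dimension sorted-sample Wasserstein estimate, and penalty weight are all illustrative assumptions, not the paper's implementation.

    # Minimal sketch (assumed PyTorch): autoencoder whose latent code is matched
    # to a standard-normal prior with a per-dimension 1-D Wasserstein distance
    # instead of a KL divergence term.
    import torch
    import torch.nn as nn

    WINDOW, LATENT = 64, 8  # hypothetical time-series window length and code size

    encoder = nn.Sequential(nn.Linear(WINDOW, 32), nn.ReLU(), nn.Linear(32, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, WINDOW))
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    def wasserstein_1d(a, b):
        # Exact 1-D Wasserstein-1 distance between two equal-size samples:
        # sort both along the batch axis and average the absolute differences.
        return (torch.sort(a, dim=0).values - torch.sort(b, dim=0).values).abs().mean()

    def train_step(batch):  # batch: (N, WINDOW) tensor of windowed series
        z = encoder(batch)
        recon = decoder(z)
        prior = torch.randn_like(z)                 # samples from the latent prior
        loss = nn.functional.mse_loss(recon, batch) + 0.1 * wasserstein_1d(z, prior)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The learned latent codes would then be discretized and fed, together with the hidden Markov model's outputs, to a stacking meta-learner; that part is omitted here.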
In neuroscience, the Default Mode Network (DMN), also known as the default network or the default-state network, is a large-scale brain network whose activities are highly correlated with each other and distinct from those of other brain networks. Many studies have shown that the DMN can influence other cognitive functions to some extent. Motivated by this idea, this paper experimentally explores how a DMN could help Spiking Neural Networks (SNNs) on image classification problems. The approach emphasizes biological plausibility in model selection and parameter settings: we adopt the Leaky Integrate-and-Fire (LIF) neuron model, use Additive White Gaussian Noise (AWGN) as the input DMN, and design the learning algorithm based on Spike-Timing-Dependent Plasticity (STDP). We then experiment on a two-layer SNN to evaluate the influence of the DMN on classification accuracy, and on a three-layer SNN to examine its influence on structure evolution; both results are positive. Finally, we discuss possible directions for future work.
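To make the modeling choices concrete, the following sketch (NumPy; all constants are illustrative assumptions rather than the paper's settings) shows a single LIF membrane update driven by a synaptic current plus AWGN standing in for the DMN, and a pair-based STDP weight update.

    # Minimal sketch (NumPy): LIF neuron with AWGN input and pair-based STDP.
    # Time constants, thresholds, and learning rates are illustrative only.
    import numpy as np

    TAU_M, V_TH, V_RESET, DT = 20.0, 1.0, 0.0, 1.0           # membrane constants (a.u.)
    TAU_PLUS, TAU_MINUS, A_PLUS, A_MINUS = 20.0, 20.0, 0.01, 0.012

    def lif_step(v, i_syn, noise_std=0.05):
        """One Euler step of the LIF membrane; AWGN is added to the input current."""
        i_total = i_syn + np.random.normal(0.0, noise_std)
        v = v + DT / TAU_M * (-v + i_total)
        spiked = v >= V_TH
        return (V_RESET if spiked else v), spiked

    def stdp_update(w, t_pre, t_post):
        """Pair-based STDP: potentiate when pre fires before post, depress otherwise."""
        dt = t_post - t_pre
        if dt >= 0:
            w += A_PLUS * np.exp(-dt / TAU_PLUS)
        else:
            w -= A_MINUS * np.exp(dt / TAU_MINUS)
        return np.clip(w, 0.0, 1.0)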
Large-scale artificial neural networks contain many redundant structures, which can trap the network in local optima and extend training time. Moreover, existing neural network topology optimization algorithms require heavy computation and complex network structure modeling. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: a generation step and a pruning step. In the generation step, the network adds hidden layers one by one until accuracy reaches a threshold. In the pruning step, the network is adapted with a pruning algorithm based on Hebb's rule or Pearson's correlation. In addition, we combine DNS with a genetic algorithm (GA-DNS). Experimental results show that, compared with traditional topology optimization algorithms, GA-DNS generates neural networks with higher construction efficiency, lower structural complexity, and higher classification accuracy.
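As an illustration of the pruning step, the sketch below (NumPy; the correlation threshold and the greedy keep-first strategy are assumptions, not the published algorithm) removes hidden nodes whose activation traces are nearly redundant under Pearson correlation.

    # Minimal sketch (NumPy): Pearson-correlation-based pruning of hidden nodes.
    import numpy as np

    def prune_redundant_nodes(activations, threshold=0.95):
        """activations: (n_samples, n_nodes) matrix of hidden-layer outputs.
        Returns indices of nodes to keep after dropping highly correlated ones."""
        corr = np.corrcoef(activations, rowvar=False)   # (n_nodes, n_nodes) Pearson r
        keep = []
        for j in range(corr.shape[0]):
            # Drop node j if it is strongly correlated with an already-kept node.
            if all(abs(corr[j, k]) < threshold for k in keep):
                keep.append(j)
        return keep

    # Usage: keep = prune_redundant_nodes(hidden_outputs); W_out = W_out[keep, :]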