In this paper, the transitions of burst synchronization are explored in a neuronal network consisting of subnetworks. The studied network is composed of electrically coupled bursting Hindmarsh-Rose neurons. Numerical results show that two types of burst synchronization transitions can be induced not only by variations of the intra- and inter-coupling strengths but also by changes in the probability of random links between different subnetworks and in the number of subnetworks. Furthermore, we find that the underlying mechanisms of these two burst synchronization transitions differ: one is due to a change in the number of spikes per burst, while the other is caused by a change in the bursting type. Considering that changes in coupling strengths and neuronal connections are closely intertwined with brain plasticity, the presented results could have important implications for the role of brain plasticity in functional behaviors associated with synchronization.
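For concreteness, the following is a minimal sketch of the kind of model this abstract refers to: two bursting Hindmarsh-Rose neurons with diffusive electrical (gap-junction) coupling. The parameter values and the coupling strength g are standard illustrative choices, not the paper's actual settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hindmarsh-Rose parameters in a standard bursting regime
# (illustrative values, not necessarily those used in the paper).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x0, I = 0.006, 4.0, -1.6, 3.0
g = 0.1  # electrical coupling strength -- assumed value

def coupled_hr(t, state):
    """Two HR neurons with diffusive (electrical) coupling on the membrane variable x."""
    x1, y1, z1, x2, y2, z2 = state
    dx1 = y1 + b * x1**2 - a * x1**3 - z1 + I + g * (x2 - x1)
    dy1 = c - d * x1**2 - y1
    dz1 = r * (s * (x1 - x0) - z1)
    dx2 = y2 + b * x2**2 - a * x2**3 - z2 + I + g * (x1 - x2)
    dy2 = c - d * x2**2 - y2
    dz2 = r * (s * (x2 - x0) - z2)
    return [dx1, dy1, dz1, dx2, dy2, dz2]

sol = solve_ivp(coupled_hr, (0, 2000), [0.1, 0.0, 0.0, -0.3, 0.0, 0.0],
                max_step=0.05)
# Burst synchrony can be gauged, e.g., by the correlation of the slow
# variables z1 and z2, which track the burst envelope.
print(np.corrcoef(sol.y[2], sol.y[5])[0, 1])
```

In a full subnetwork study, the pairwise coupling term would be replaced by a sum over each neuron's intra- and inter-subnetwork neighbors with separate coupling strengths.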
The purpose of this study is to explore the low-frequency advantages and characteristics of hydraulic mounts used for vibration isolation of an earth-moving machinery cab. Because the cab's center of mass sits relatively high above its supporting surface, pitch and roll vibrations of the cab are prone to arise in the low-frequency range. A six-degree-of-freedom (6-DOF) model of the cab supported by hydraulic mounts with quadratic damping is set up in this paper, and a simulation comparing the performance of the hydraulic mounts with that of the rubber mounts used in the cab is carried out. The results show that the cab system with quadratic-damping hydraulic mounts mitigates the vibrations remarkably well and in turn enhances cab comfort, while the nonlinear damping characteristic has almost no effect on the natural frequencies of the cab system. A new approach is also proposed that considers the absolute displacement of the cab's pitch motion in addition to the traditional absolute accelerations, in order to improve the ride-comfort indicators for a suspended cab with a high-positioned mass center in the isolation design of earth-moving machinery cabs.
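As a rough illustration of the quadratic damping described above, the sketch below integrates a single-DOF base-excited mass on a mount whose damping force scales as |v|·v. All numerical values are assumed for illustration, and the paper's model has six DOF rather than one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-DOF stand-in for a mass on a quadratic-damping hydraulic mount
# under sinusoidal base excitation. All numbers are assumed.
m, k, c = 500.0, 1.0e5, 2.0e3   # mass [kg], stiffness [N/m], quad. damping [N s^2/m^2]
Y, w = 0.005, 2 * np.pi * 3.0   # base amplitude [m], excitation frequency [rad/s]

def rhs(t, state):
    x, v = state
    y = Y * np.sin(w * t)          # base displacement
    ydot = Y * w * np.cos(w * t)   # base velocity
    vrel = v - ydot
    # Quadratic damping: the force grows as |v_rel| * v_rel, so it is strong
    # for large relative velocities but vanishes quickly for small-amplitude
    # motion, which is why it barely shifts the system's natural frequencies.
    f_damp = c * np.abs(vrel) * vrel
    return [v, -(k * (x - y) + f_damp) / m]

sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], max_step=1e-3)
print("undamped natural frequency [Hz]:", np.sqrt(k / m) / (2 * np.pi))
print("late-time peak displacement [m]:", np.abs(sol.y[0][-5000:]).max())
```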
Recently, optical neural networks (ONNs) integrated on photonic chips have received extensive attention because they are expected to perform the same pattern-recognition tasks as electronic platforms with high efficiency and low power consumption. However, the current lack of suitable learning algorithms for training ONNs obstructs their further development. In this article, we propose a novel learning strategy based on neuroevolution to design and train ONNs. Two typical neuroevolution algorithms are used to determine the hyper-parameters of the ONNs and to optimize the weights (phase shifters) in the connections. To demonstrate the effectiveness of the training algorithms, the trained ONNs are applied to classification tasks on the iris plants dataset, the wine recognition dataset, and modulation-format recognition. The results show that the neuroevolution-based training algorithms are competitive with traditional learning algorithms in both accuracy and stability. Compared with previous works, we introduce an efficient training method for ONNs and demonstrate their broad application prospects in pattern recognition, reinforcement learning, and beyond.

Introduction

Artificial neural networks (ANNs), and deep learning [1] in particular, have attracted a great deal of research attention for an impressively large number of applications, such as image processing [2], natural language processing [3], acoustic signal processing [4], time-series processing [5], self-driving [6], games [7], robotics [8], and so on. It should be noted that training ANNs with deep hidden layers, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs) such as AlexNet [9], VGGNet [10], GoogLeNet [11], ResNet [12], and long short-term memory [13], typically demands significant computational time and resources [1]. Thus, various special-purpose electronic platforms based on graphics processing units (GPUs) [14], field-programmable gate arrays (FPGAs) [15], and application-specific integrated circuits (ASICs) [16] were invented to accelerate the training and inference of deep learning. On the other hand, in pursuit of general artificial intelligence, brain-inspired chips including IBM TrueNorth [17], Intel Loihi [18], and SpiNNaker [19] were designed by imitating the structure of the brain. However, even though both energy efficiency and speed were improved, these brain-inspired chips could hardly compete with the state of the art in deep learning [20]. In recent years, optical computing has been demonstrated as an effective alternative to traditional electronic computing architectures and is expected to alleviate the bandwidth bottlenecks and power consumption of electronics [21]. For example, new photonic approaches to spiking neurons and scalable network architectures based on excitable lasers, the broadcast-and-weight protocol, and reservoir computing have been illustrated [22][23][24]. Although ultrafast spiking responses were achiev...
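As a toy illustration of evolutionary training of phase shifters, the sketch below evolves the phases of a small Mach-Zehnder-style mesh to classify the iris dataset. The mesh layout, the 2x2 unitary parameterization, and the simple (1+1) evolution strategy are assumptions made for illustration; the paper's actual architecture and its two neuroevolution algorithms may differ.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Toy ONN: a 4-mode interferometer mesh whose phase shifts are evolved
# to separate the 3 iris classes. Layout and parameterization are assumed.
rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X = X / np.abs(X).max()                      # encode features as modal amplitudes
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

PAIRS = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3), (1, 2)]  # rectangular mesh order

def mesh_unitary(phases):
    """Build a 4x4 unitary from (theta, phi) pairs, one pair per 2x2 element."""
    U = np.eye(4, dtype=complex)
    for (i, j), (th, ph) in zip(PAIRS, phases.reshape(-1, 2)):
        T = np.eye(4, dtype=complex)
        T[i, i], T[i, j] = np.exp(1j * ph) * np.cos(th), -np.sin(th)
        T[j, i], T[j, j] = np.exp(1j * ph) * np.sin(th), np.cos(th)
        U = T @ U
    return U

def accuracy(phases, Xs, ys):
    # Detected intensities at the output ports; 3 ports = 3 class scores.
    out = np.abs(Xs.astype(complex) @ mesh_unitary(phases).T) ** 2
    return np.mean(out[:, :3].argmax(axis=1) == ys)

# Simple (1+1) evolution strategy over the 12 phase parameters:
# mutate with Gaussian noise, keep the child if it does not score worse.
parent = rng.uniform(0, 2 * np.pi, size=12)
best = accuracy(parent, Xtr, ytr)
for _ in range(5000):
    child = parent + rng.normal(0.0, 0.1, size=12)
    fit = accuracy(child, Xtr, ytr)
    if fit >= best:
        parent, best = child, fit
print(f"train acc: {best:.3f}  test acc: {accuracy(parent, Xte, yte):.3f}")
```

Gradient-free search of this kind is attractive for photonic hardware because the fitness evaluation can be performed directly on-chip, with no need for the backpropagated gradients that are hard to obtain through physical phase shifters.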