Methods for modelling the human brain as a complex system have increased remarkably in the literature as researchers seek to understand the foundations underlying cognition, behaviour, and perception. Computational methods, especially those based on graph theory, have recently contributed significantly to understanding the wiring connectivity of the brain by modelling it as a set of nodes connected by edges. The brain's spatiotemporal dynamics can therefore be studied holistically by considering a network in which many neurons are represented by nodes. Various models have been proposed for modelling such neurons. A recently proposed method for training such networks, called full-Force, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e. the FORCE method). In this paper, the first direct application of a variant of the full-Force method to biologically motivated Spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules, each modelled as a Small-World Network (SWN), a biologically plausible graph topology. Thus, the first direct application of a variant of the full-Force method to modular SWNs is demonstrated, evaluated through regression and information-theoretic metrics. For the first time, this method is applied to spiking neuron models and trained on various real-life Electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that the trained SRNNs match the EEG signals almost perfectly, while the network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, both more biologically plausible than in previous work, can be tuned to real biological signal dynamics.
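The abstract describes each module of the SRNN as a Small-World Network. As an illustrative sketch only (not taken from the paper; all function names and parameters below are assumptions), a modular small-world topology can be built with the classic Watts–Strogatz construction: a ring lattice per module with edges randomly rewired, plus sparse random links between modules.

```python
import random

def watts_strogatz(n, k, p, rng=None):
    """Small-world graph: ring lattice with k nearest neighbours (k even),
    each edge rewired with probability p. Returns a set of (u, v) pairs."""
    rng = rng or random.Random(0)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            u, v = i, (i + j) % n
            edges.add((min(u, v), max(u, v)))
    rewired = set()
    for (u, v) in sorted(edges):
        if rng.random() < p:
            # Rewire to a random target, avoiding self-loops and duplicates.
            w = rng.randrange(n)
            while w == u or (min(u, w), max(u, w)) in rewired:
                w = rng.randrange(n)
            v = w
        rewired.add((min(u, v), max(u, v)))
    return rewired

def modular_swn(n_modules, module_size, k, p, inter_prob, seed=0):
    """Each module is a small-world graph; sparse random edges link modules."""
    rng = random.Random(seed)
    n = n_modules * module_size
    edges = set()
    for m in range(n_modules):
        off = m * module_size
        for (u, v) in watts_strogatz(module_size, k, p, rng):
            edges.add((u + off, v + off))
    # Sparse inter-module connections.
    for u in range(n):
        for v in range(u + 1, n):
            if u // module_size != v // module_size and rng.random() < inter_prob:
                edges.add((u, v))
    return edges
```

The sketch covers only the graph topology; the paper additionally places spiking neuron models on the nodes and trains the network with a full-Force variant, which is not shown here.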
Neural attention (NA) has become a key component of sequence-to-sequence models that yield state-of-the-art performance in tasks as hard as abstractive document summarization (ADS) and video captioning (VC). NA mechanisms infer context vectors, which constitute weighted sums of deterministic input-sequence encodings, adaptively sourced over long temporal horizons. Inspired by recent work in the field of amortized variational inference (AVI), in this work we treat the context vectors generated by soft-attention (SA) models as latent variables, with approximate finite-mixture-model posteriors inferred via AVI. We posit that this formulation may yield stronger generalization capacity, in line with the outcomes of existing applications of AVI to deep networks. To illustrate our method, we implement it and evaluate it experimentally on challenging ADS and VC benchmarks, demonstrating its improved effectiveness over state-of-the-art alternatives.
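For context, the deterministic context vector that this abstract refers to is a softmax-weighted sum of encoder states. A minimal sketch, with illustrative variable names; the paper's actual contribution, treating this vector as a latent variable with an AVI-inferred mixture posterior, is not implemented here:

```python
import math

def soft_attention_context(encodings, query):
    """Standard soft attention: score each encoding against the query,
    softmax the scores into weights, and return the weighted sum."""
    # Dot-product scores between the query and each input encoding.
    scores = [sum(q * e for q, e in zip(query, enc)) for enc in encodings]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Context vector: attention-weighted sum of the encodings.
    dim = len(encodings[0])
    return [sum(w * enc[d] for w, enc in zip(weights, encodings))
            for d in range(dim)]
```

Because the weights sum to one, the context vector lies in the convex hull of the encodings; the abstract's proposal replaces this deterministic output with samples from an approximate posterior.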