With POWER8, a new generation of POWER processors became available. This architecture features a moderate number of cores, each of which exposes a high degree of instruction-level as well as thread-level parallelism. The high-performance processing capabilities are integrated with a rich memory hierarchy that provides high bandwidth through a large set of memory chips. We explore the efficient use of this processor architecture for a set of applications with significantly different performance signatures.
Neuroscience models commonly have a high number of degrees of freedom, and only specific regions within the parameter space produce dynamics of interest. Developing tools and strategies to efficiently find these regions is therefore of high importance for advancing brain research. Exploring high-dimensional parameter spaces using numerical simulations has become a frequently used technique in recent years in many areas of computational neuroscience. Today, high-performance computing (HPC) provides a powerful infrastructure to speed up explorations and increase our general understanding of a model's behavior in reasonable time. Learning to learn (L2L) is a well-known concept in machine learning (ML) and a specific method for acquiring constraints to improve learning performance. This concept can be decomposed into a two-loop optimization process, where the target of optimization can be any program, such as an artificial neural network, a spiking network, a single-cell model, or a whole-brain simulation. In this work, we present L2L as an easy-to-use and flexible framework to perform parameter and hyperparameter space exploration of neuroscience models on HPC infrastructure. Learning to learn is an implementation of the L2L concept written in Python. This open-source software allows several instances of an optimization target to be executed with different parameters in an embarrassingly parallel fashion on HPC. L2L provides a set of built-in optimizer algorithms that make adaptive and efficient exploration of parameter spaces possible. Unlike other optimization toolboxes, L2L offers maximum flexibility in the way the optimization target can be executed. In this paper, we show a variety of examples of neuroscience models being optimized within the L2L framework to execute different types of tasks.
The tasks used to illustrate the concept range from reproducing empirical data to learning how to solve a problem in a dynamic environment. We particularly focus on simulations with models ranging from the single cell to the whole brain, using a variety of simulation engines such as NEST, Arbor, TVB, OpenAI Gym, and NetLogo.
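The two-loop structure described in the abstract above can be sketched in a few lines of Python. The following is an illustrative toy, not the actual L2L API: the function names, the random-search optimizer, and the quadratic fitness function are all stand-ins for a real optimizer and a real neuroscience simulation.

```python
import random

def evaluate_target(params):
    """Inner loop: run the optimization target once with `params` and
    return a fitness value. A toy quadratic stands in here for what
    would be a neuroscience simulation (NEST, Arbor, TVB, ...)."""
    return -sum((p - 0.5) ** 2 for p in params)

def outer_loop(n_generations, pop_size, dim):
    """Outer loop: propose parameter sets, evaluate them, and keep the
    best. Random search is used only for brevity; L2L ships adaptive
    built-in optimizers, and the inner-loop evaluations are what run
    embarrassingly parallel on HPC."""
    best_params, best_fitness = None, float("-inf")
    for _ in range(n_generations):
        population = [[random.random() for _ in range(dim)]
                      for _ in range(pop_size)]
        for params in population:  # independent: parallelizable on HPC
            fitness = evaluate_target(params)
            if fitness > best_fitness:
                best_params, best_fitness = params, fitness
    return best_params, best_fitness

params, fitness = outer_loop(n_generations=20, pop_size=10, dim=3)
```

The key design point the sketch captures is the separation of concerns: the optimizer never needs to know what the target is, only how to hand it parameters and read back a fitness, which is what lets the same outer loop drive anything from a single-cell model to a whole-brain simulation.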
A variety of software simulators exist for neuronal networks, and a subset of these tools allows scientists to model neurons in high morphological detail. The scalability of such simulation tools over a wide range of neuronal network sizes and cell complexities is predominantly limited by the effective allocation of simulation components across computational nodes and by the communication overhead between them. To make simulation software more scalable, it is therefore important to develop a robust benchmarking strategy that provides insight into specific computational bottlenecks for models of realistic size and complexity. In this study, we demonstrate the use of the Brain Scaffold Builder (BSB; De Schepper et al., 2021) as a framework for performing such benchmarks. We perform a comparison between the well-known neuromorphological simulator NEURON (Carnevale and Hines, 2006) and Arbor (Abi Akar et al., 2019), a new simulation library developed within the framework of the Human Brain Project. The BSB can construct identical neuromorphological and network setups of highly spatially and biophysically detailed networks for each simulator. This ensures good coverage of feature support in each simulator and realistic workloads. After validating the outputs of the BSB-generated models, we execute the simulations on a variety of hardware configurations consisting of two types of nodes (GPU and CPU). We investigate the performance of two different network models, one suited for a single machine and one for distributed simulation, across different mechanisms, mechanism classes, mechanism combinations, and cell types. Our benchmarks show that, depending on the distribution scheme deployed by Arbor, a speed-up over NEURON of between 60 and 400 can be achieved. Additionally, Arbor can be up to two orders of magnitude more energy efficient.
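At its core, a simulator comparison of the kind described above reduces to timing identical workloads under each backend and reporting the ratio. A minimal sketch of such a timing harness follows; the two lambda workloads are placeholders for real NEURON and Arbor runs, and taking the best of several repeats is one common way to reduce noise from other system activity:

```python
import time

def time_run(run_fn, repeats=3):
    """Run `run_fn` several times and return the best wall-clock time.
    perf_counter() is a monotonic high-resolution clock suited to
    benchmarking short intervals."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        run_fn()
        best = min(best, time.perf_counter() - start)
    return best

# Placeholder workloads standing in for the two simulators' runs of
# an identical BSB-generated network model.
t_neuron = time_run(lambda: sum(i * i for i in range(200_000)))
t_arbor = time_run(lambda: sum(i * i for i in range(2_000)))
speedup = t_neuron / t_arbor
```

In a real benchmark the interesting work lies outside this harness: constructing bitwise-comparable models for both simulators, validating their outputs against each other, and measuring energy as well as time, which is where the BSB framework described above comes in.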