We study variants of the hierarchical modular network models suggested by Kaiser and Hilgetag [Front. Neuroinform. 4 (2010) 8] to model functional brain connectivity, using extensive simulations and quenched mean-field theory (QMF), focusing on structures with a connection probability that decays exponentially with the level index. Such networks can be embedded in two-dimensional Euclidean space. We explore the dynamic behavior of the contact process (CP) and threshold models on networks of this kind, including hierarchical trees. While in the small-world networks originally proposed to model brain connectivity the topological heterogeneities are not strong enough to induce deviations from mean-field behavior, we show that a Griffiths phase can emerge when the connection probability is reduced toward the percolation threshold. In that case the topological dimension of the networks is finite, and extended regions of bursty, power-law dynamics are observed. Localization in the steady state is also shown via QMF. We investigate the effects of link asymmetry and coupling disorder, and show that, in the presence of link disorder, localization can occur even in small-world networks with high connectivity.
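To make the construction concrete, here is a minimal Python sketch of the two ingredients named above: a hierarchical modular network whose connection probability decays exponentially with the level index, and the contact process running on it. The binary module hierarchy, the parameter names (levels, m0, p0, alpha, lam), and the sequential update scheme are illustrative assumptions, not the exact model of Kaiser and Hilgetag or the paper's simulation protocol.

```python
import numpy as np

def hmn(levels=4, m0=4, p0=1.0, alpha=0.5, rng=None):
    """Sketch of a hierarchical modular network: N = m0 * 2**levels nodes
    in a binary module hierarchy; two nodes whose lowest common module sits
    l levels up are linked with probability p0 * alpha**l, so the connection
    probability decays exponentially with the level index."""
    rng = rng or np.random.default_rng(0)
    n = m0 * 2 ** levels
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            l = ((i // m0) ^ (j // m0)).bit_length()  # hierarchical distance
            if rng.random() < p0 * alpha ** l:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def contact_process(adj, lam=2.0, steps=200_000, rng=None):
    """Standard contact process: a randomly chosen active node deactivates
    with probability 1/(1+lam) (death, rate 1); otherwise it tries to
    activate a uniformly random neighbour (branching, rate lam).
    Returns the activity density after each step."""
    rng = rng or np.random.default_rng(1)
    n = len(adj)
    active = list(range(n))            # start from the fully active state
    is_active = np.ones(n, dtype=bool)
    rho = []
    for _ in range(steps):
        if not active:                 # absorbing state reached
            break
        k = rng.integers(len(active))
        i = active[k]
        if rng.random() < 1.0 / (1.0 + lam):
            is_active[i] = False       # remove by swap-and-pop
            active[k] = active[-1]
            active.pop()
        elif adj[i]:
            j = adj[i][rng.integers(len(adj[i]))]
            if not is_active[j]:
                is_active[j] = True
                active.append(j)
        rho.append(len(active) / n)
    return rho
```

Sweeping lam at fixed alpha and watching how the density decays is the standard way to distinguish the regimes discussed above: exponential decay in the absorbing phase, power-law decay over an extended lam interval in a Griffiths phase.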
The current design trend in large-scale machine learning is to use distributed clusters of CPUs and GPUs with MapReduce-style programming. Some have been led to believe that this type of horizontal scaling can reduce or even eliminate the need for traditional algorithm development, careful parallelization, and performance engineering. This paper is a case study showing the contrary: the benefits of algorithms, parallelization, and performance engineering can sometimes be so vast that it is possible to solve "cluster-scale" problems on a single commodity multicore machine. Connectomics is an emerging area of neurobiology that uses cutting-edge machine learning and image processing to extract brain connectivity graphs from electron microscopy images. It has long been assumed that processing connectomics data will require mass storage, farms of CPUs/GPUs, and months (if not years) of processing time. We present a high-throughput connectomics-on-demand system that runs on a multicore machine with fewer than 100 cores and extracts connectomes at the terabyte-per-hour pace of modern electron microscopes.
CCS Concepts: • Computing methodologies → Concurrent algorithms; • Computer systems organization → Multicore architectures; • Computing methodologies → Neural networks; • Applied computing → Computational biology
We study a phase retrieval problem under the Poisson noise model. Motivated by the PhaseLift approach, we approximate the maximum-likelihood estimator by solving a convex program with a nuclear norm constraint. While the Frank-Wolfe algorithm, together with the Lanczos method, can efficiently handle nuclear norm constraints, our objective function does not have a Lipschitz continuous gradient, and hence existing convergence guarantees for the Frank-Wolfe algorithm do not apply. In this paper, we show that the Frank-Wolfe algorithm works for the Poisson phase retrieval problem and has a global convergence rate of O(1/t), where t is the iteration counter. We provide rigorous theoretical guarantees and illustrative numerical results.
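The appeal of Frank-Wolfe here is that the linear minimization oracle over the nuclear-norm ball needs only the extreme eigenpair of the gradient, which Lanczos iteration (scipy's eigsh) delivers cheaply. The following Python sketch illustrates that structure on the PhaseLift formulation; the step size 2/(t+2), the eps guard inside the log, and the parameter names are assumptions for illustration, not the paper's exact algorithm or analysis.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def poisson_nll_grad(X, A, y, eps=1e-12):
    """Negative Poisson log-likelihood f(X) = sum_i (mu_i - y_i log mu_i),
    where mu_i = a_i^T X a_i, and its gradient sum_i (1 - y_i/mu_i) a_i a_i^T.
    The measurement vectors a_i are the rows of A; eps guards the log."""
    mu = np.einsum('ij,jk,ik->i', A, X, A)
    mu = np.maximum(mu, eps)                      # sketch-level safeguard
    f = np.sum(mu - y * np.log(mu))
    G = A.T @ ((1.0 - y / mu)[:, None] * A)
    return f, G

def frank_wolfe_phaselift(A, y, R, iters=200):
    """Frank-Wolfe over the nuclear-norm ball {X : ||X||_* <= R}.
    The LMO atom -R * sign(lam) * v v^T comes from the largest-magnitude
    eigenpair of the (symmetric) gradient, computed by Lanczos via eigsh."""
    n = A.shape[1]
    X = (R / n) * np.eye(n)                       # feasible, keeps mu_i > 0
    for t in range(iters):
        _, G = poisson_nll_grad(X, A, y)
        lam, v = eigsh(G, k=1, which='LM')
        S = -R * np.sign(lam[0]) * np.outer(v[:, 0], v[:, 0])
        gamma = 2.0 / (t + 2.0)                   # classical FW step (assumed)
        X = (1.0 - gamma) * X + gamma * S
    return X

# Toy usage: recover X ~ x x^T from Poisson counts y_i ~ Poisson((a_i^T x)^2).
rng = np.random.default_rng(0)
n, m = 20, 200
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = rng.poisson((A @ x) ** 2)
X_hat = frank_wolfe_phaselift(A, y, R=x @ x)      # R = ||x x^T||_* = ||x||^2
```

The leading eigenvector of X_hat then serves as the estimate of x up to a global sign, as is standard for lifted phase retrieval.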