There is growing interest in studying large-scale graphs with millions of vertices and billions of edges, to the point that a dedicated benchmark, Graph500, has been defined to measure the performance of graph algorithms on modern computing architectures. At first glance, Graphics Processing Units (GPUs) are not an ideal platform for executing graph algorithms, which are characterized by low arithmetic intensity and irregular memory access patterns. Moreover, studying truly large graphs requires multiple GPUs in order to overcome the memory size limitations of a single GPU. In the present paper, we propose several techniques to minimize the communication among GPUs.
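To see why inter-GPU communication dominates, note that once a graph is partitioned across devices, every edge whose endpoints live on different GPUs implies a message during each traversal step. The following is a minimal sketch (not the paper's method; all names are hypothetical) that estimates this communication volume as the number of cut edges under a simple 1-D block partition:

```python
# Illustrative sketch, not the paper's algorithm: estimate inter-GPU
# communication as the number of edges crossing a 1-D vertex partition.

def partition_owner(vertex, num_vertices, num_gpus):
    """Map a vertex to a GPU id by contiguous block partitioning."""
    block = (num_vertices + num_gpus - 1) // num_gpus
    return vertex // block

def cut_edges(edges, num_vertices, num_gpus):
    """Count edges whose endpoints live on different GPUs; each such
    edge implies a message during a frontier exchange."""
    return sum(
        1 for u, v in edges
        if partition_owner(u, num_vertices, num_gpus)
        != partition_owner(v, num_vertices, num_gpus)
    )

# A 6-vertex ring split across 2 GPUs: only the edges linking
# blocks {0,1,2} and {3,4,5} cross the partition.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(cut_edges(edges, num_vertices=6, num_gpus=2))  # 2
```

Techniques such as smarter partitioning or frontier compression aim precisely at shrinking this cut-edge traffic.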