Recently, graph convolutional networks (GCNs) have gained wide attention due to their ability to capture node relationships in graphs. A problem arises when a full-batch GCN is trained on large graph datasets: the computational and memory requirements become prohibitive. To address this issue, mini-batch GCN training has been introduced to improve the scalability of GCN training on large datasets by sampling and training on only a subset of the graph in each batch. Although several acceleration techniques have been designed to boost the efficiency of full-batch GCN training, they pay little attention to mini-batch GCN, which differs from full-batch GCN in its dynamically sampled graph structures. Building on our previous work GCNTrain [28], which was originally designed to accelerate full-batch GCN training, we devise GCNTrain+, a universal accelerator that tackles the performance bottlenecks of both full-batch and mini-batch GCN training. GCNTrain+ is equipped with two engines that optimize computation and memory access in GCN training, respectively. To reduce computation overhead, we propose to dynamically reconfigure the computation order based on the varying data dimensions involved in each training batch. Moreover, we build a unified computation engine that performs the sparse-dense matrix multiplications (SpDM) and sparse-sparse matrix multiplications (SpSpM) arising in GCN training in a uniform manner. To alleviate the memory burden, we devise a two-phase dynamic clustering mechanism that captures data locality, along with customized hardware that reduces the clustering overhead. We evaluate GCNTrain+ on seven datasets, and the results show that GCNTrain+ achieves speedups of 136.0×, 52.6×, 2.2×, and 1.5× over CPU, GPU, GCNAX, and GCNTrain, respectively, in full-batch GCN training. In mini-batch GCN training, GCNTrain+ outperforms them with speedups of 131.6×, 67.1×, 4.4×, and 1.5×, respectively.
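The abstract's idea of reconfiguring the computation order per batch can be illustrated with the two-operand GCN layer A·X·W: depending on the batch's node count, feature width, and adjacency sparsity, either (A·X)·W or A·(X·W) is cheaper. GCNTrain+ realizes this in hardware; the snippet below is only a minimal software sketch of the dimension-based cost comparison, using a simplified multiply-count model and illustrative names of our own (it is not the paper's implementation and ignores feature sparsity that SpSpM would exploit).

```python
import numpy as np
import scipy.sparse as sp

def ordering_costs(a: sp.csr_matrix, x: np.ndarray, w: np.ndarray):
    """Rough multiply counts for the two orderings of A @ X @ W.

    With n nodes, f input features, h output features, and nnz(A) edges:
      (A @ X) @ W  ~  nnz(A)*f + n*f*h   multiplies
      A @ (X @ W)  ~  n*f*h   + nnz(A)*h multiplies
    """
    n, f = x.shape
    h = w.shape[1]
    nnz = a.nnz
    cost_agg_first = nnz * f + n * f * h    # aggregate, then combine
    cost_comb_first = n * f * h + nnz * h   # combine, then aggregate
    return cost_agg_first, cost_comb_first

def gcn_layer(a: sp.csr_matrix, x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Pick the cheaper computation order for this batch's dimensions."""
    cost_agg_first, cost_comb_first = ordering_costs(a, x, w)
    if cost_agg_first <= cost_comb_first:
        return (a @ x) @ w
    return a @ (x @ w)

# Toy usage on a randomly sampled "mini-batch" subgraph.
rng = np.random.default_rng(0)
n, f, h = 1000, 128, 16
a = sp.random(n, n, density=0.005, format="csr", random_state=0)
x = rng.standard_normal((n, f))
w = rng.standard_normal((f, h))
print(gcn_layer(a, x, w).shape)  # (1000, 16)
```

Because mini-batch sampling changes n and nnz(A) from batch to batch, the cheaper ordering can differ across batches, which is why a fixed computation order tuned for full-batch training can be suboptimal.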