“…The focus of our work is on accelerating large-scale GNN training with an ISP architecture. There is a large body of prior literature exploring in-storage/near-data processing [1], [5], [10], [12], [13], [16], [25], [27], [31]-[33], [35], [38], [39], [42], [49], [51], [54], [58], [65], [67], [72]-[74], [76], [81]-[83] or in-memory processing [2]-[4], [9], [18], [20], [23], [29], [34], [36], [37], [44], [45], [50], [60], [68], [79] architectures for data-intensive workloads, as well as ASIC/FPGA/GPU-based acceleration for graph neural networks [24], [40], [52], [54]-…”