“…Experimental Setup: We compare the performance of the proposed BASGCN model on graph classification tasks against a) six state-of-the-art graph kernels and b) twelve state-of-the-art deep learning methods for graphs. Specifically, the graph kernels are 1) the Jensen-Tsallis q-difference kernel (JTQK) with q = 2 [37], 2) the Weisfeiler-Lehman subtree kernel (WLSK) [11], 3) the shortest-path graph kernel (SPGK) [38], 4) the shortest-path kernel based on core variants (CORE SP) [39], 5) the random walk graph kernel (RWGK) [40], and 6) the graphlet count kernel (GK) [41]. The deep learning methods are 1) the deep graph convolutional neural network (DGCNN) [31], 2) the PATCHY-SAN-based convolutional neural network for graphs (PSGCNN) [32], 3) the diffusion convolutional neural network (DCNN) [25], 4) the deep graphlet kernel (DGK) [42], 5) the graph capsule convolutional neural network (GCCNN) [43], 6) the feature-driven anonymous walk embeddings (AWE) [44], 7) the edge-conditioned convolutional network (ECC) [45], 8) the high-order graph convolutional network (HO-GCN) [46], 9) the graph convolutional network with Differentiable Pooling (DiffPool) [47], 10) the graph convolutional network with Self-Attention Pooling (SAGPool) [48], 11) the graph convolutional network with EigenPooling (EigenPool) [48], and 12) the degree-specific graph neural network (DEMO-Net) [49].…”
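To make the graph-kernel baselines concrete, the following is a minimal, self-contained sketch of the idea behind the Weisfeiler-Lehman subtree kernel (WLSK) [11]: each node's label is repeatedly augmented with the sorted multiset of its neighbours' labels, and the kernel value is the dot product of the resulting label-count histograms. This is an illustrative toy implementation, not the authors' code; the graph representation (adjacency lists plus integer node labels) and function names are assumptions for the example.

```python
from collections import Counter

def wl_features(adj, labels, h=2):
    """WL feature map: count original labels plus h rounds of
    compound labels (node label, sorted neighbour labels)."""
    feats = Counter(labels)
    for _ in range(h):
        # relabel every node by its own label and its neighbours' labels
        labels = [(labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in range(len(adj))]
        feats.update(labels)
    return feats

def wl_subtree_kernel(adj1, lab1, adj2, lab2, h=2):
    """Kernel value: dot product of the two WL feature maps."""
    f1 = wl_features(adj1, lab1, h)
    f2 = wl_features(adj2, lab2, h)
    return sum(f1[k] * f2[k] for k in f1)

# toy usage: a triangle vs. a 3-node path, all nodes labelled 1
triangle = ([[1, 2], [0, 2], [0, 1]], [1, 1, 1])
path = ([[1], [0, 2], [1]], [1, 1, 1])
print(wl_subtree_kernel(*triangle, *triangle, h=1))  # self-similarity
print(wl_subtree_kernel(*triangle, *path, h=1))      # cross-similarity
```

After one refinement round the triangle scores higher against itself than against the path, since the path's endpoint nodes receive a different compound label; this label-refinement view also explains why WLSK scales well compared with, e.g., the random walk kernel (RWGK).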