“…One also has GNNs based on spectral graph information, e.g., (Bruna et al, 2014; Defferrard et al, 2016; Gama et al, 2019; Kipf and Welling, 2017; Levie et al, 2019; Monti et al, 2017; Balcilar et al, 2021b). Some GNN architectures employ vertex identifiers (Murphy et al, 2019; Vignac et al, 2020), random features (Abboud et al, 2021; Dasoulas et al, 2020; Sato et al, 2021), equivariant graph polynomials (Puny et al, 2023), homomorphism and subgraph counts (Barceló et al, 2021; Bouritsas et al, 2020; Nguyen and Maehara, 2020), simplicial (Bodnar et al, 2021b) and cellular complexes (Bodnar et al, 2021a), persistent homology (Horn et al, 2022), random walks (Tönshoff et al, 2021; Martinkus et al, 2022), graph decompositions (Talak et al, 2021), relational, distance (Li et al, 2020), and directional information (Beaini et al, 2021), subgraph information (Cotta et al, 2021; Feng et al, 2022; Huang et al, 2023; Papp et al, 2021; Qian et al, 2022; Thiede et al, 2021; Wijesinghe and Wang, 2022; You et al, 2021; Zhang and Li, 2021; Zhao et al, 2022; Zhang et al, 2023a), and biconnectivity (Zhang et al, 2023b). Examples of graph neural network architectures using higher-order p-vertex embeddings for p ≥ 2 include (Azizian and Lelarge, 2021;…”