Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3511986

PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm

Cited by 41 publications (20 citation statements)
References 40 publications

“…The resulting DAG has 9 choices, as illustrated in Fig. 1. Intermediate nodes without successor nodes are connected to the output node by concatenation. Besides this macro space, we also consider optional fully-connected pre-process and post-process layers, as in [60,62]. Note that, to avoid exploding the search space, we treat the numbers of pre-process and post-process layers as hyper-parameters, which will be discussed in Section 3.3.…”
Section: Search Space Design
confidence: 99%
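
The macro space described in this excerpt is straightforward to prototype. Below is a minimal, hedged sketch (not the cited paper's code) of a DAG cell in which every intermediate node without a successor is concatenated into the output node, while the numbers of pre-process and post-process layers stay plain hyper-parameters rather than searched choices; all names here (DagCell, OPS, num_pre, num_post) are illustrative assumptions.

import torch
import torch.nn as nn

# Candidate operations on each DAG edge (an illustrative subset).
OPS = {
    "identity": lambda d: nn.Identity(),
    "linear":   lambda d: nn.Linear(d, d),
    "gelu_mlp": lambda d: nn.Sequential(nn.Linear(d, d), nn.GELU()),
}

class DagCell(nn.Module):
    """DAG over intermediate nodes; intermediate nodes with no successor
    are concatenated into the output node. Node 0 is the input node, and
    `edges` (src, dst, op_name) must be topologically ordered."""
    def __init__(self, dim, edges, num_pre=1, num_post=1):
        super().__init__()
        self.edges = edges
        self.ops = nn.ModuleList([OPS[name](dim) for _, _, name in edges])
        n_nodes = 1 + max(dst for _, dst, _ in edges)
        sources = {src for src, _, _ in edges}
        # "Leaves": intermediate nodes that never feed another node.
        self.leaves = [v for v in range(1, n_nodes) if v not in sources]
        # Pre-/post-process depths are hyper-parameters, not architecture choices.
        self.pre = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(num_pre)])
        self.post = nn.Sequential(
            nn.Linear(dim * len(self.leaves), dim),
            *[nn.Linear(dim, dim) for _ in range(num_post - 1)],
        )

    def forward(self, x):
        states = {0: self.pre(x)}
        for op, (src, dst, _) in zip(self.ops, self.edges):
            states[dst] = states.get(dst, 0) + op(states[src])  # sum incoming edges
        return self.post(torch.cat([states[v] for v in self.leaves], dim=-1))

# Usage: cell = DagCell(64, [(0, 1, "linear"), (1, 2, "gelu_mlp"), (0, 3, "identity")])
#        out = cell(torch.randn(8, 64))   # leaves {2, 3} are concatenated
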
“…PaSca [132] is a novel paradigm and system that offers Bayesian optimization to systematically construct and explore the design space for scalable GNNs, rather than individual designs. PaSca proposes a novel abstraction called SGAP to address data and model scalability issues.…”
Section: Systems On GPU Clusters
confidence: 99%
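
For intuition on why this abstraction helps with scalability: under an SGAP-style decoupling, neighbor aggregation is pre-computed once as plain sparse-matrix products, so training sees only ordinary feature matrices and never recurses over the graph. The sketch below is a hedged illustration of that pattern, not PaSca's actual API; the random-walk normalization and the concat aggregator are assumptions.

import numpy as np
import scipy.sparse as sp

def precompute_messages(adj: sp.csr_matrix, x: np.ndarray, hops: int):
    """Pre-processing stage: row-normalize the adjacency matrix and
    collect the hop-wise smoothed features X, AX, A^2 X, ... once."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv = sp.diags(1.0 / np.maximum(deg, 1.0))   # guard isolated nodes
    a_norm = d_inv @ adj                           # random-walk normalization
    msgs, h = [x], x
    for _ in range(hops):
        h = a_norm @ h                             # one sparse matmul per hop
        msgs.append(h)
    return msgs

def aggregate_concat(msgs):
    """Training-stage input: any message aggregator works; here, concat.
    A standard MLP can then be fit on the result, graph-free."""
    return np.concatenate(msgs, axis=1)
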
“…Most existing GNNs need to repeatedly perform computationally expensive and recursive feature smoothing, which involves the participation of the entire graph at each training epoch (Zhang et al., 2022). Furthermore, most methods adopt the same training loss function as GAE, which introduces high memory usage by storing the dense-form adjacency matrix on the GPU. For a graph of size 200 million, its dense-form adjacency matrix requires a space of roughly 150GB, exceeding the memory capacity of current high-end GPU devices.…”
Section: Introduction
confidence: 99%
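
The memory problem in this excerpt follows from simple arithmetic: a dense adjacency matrix of an n-node graph has n² entries, so its storage grows quadratically with node count. A back-of-the-envelope helper, assuming float32 entries (the excerpt does not state the dtype, so this is an assumption):

def dense_adj_gib(n_nodes: int, bytes_per_entry: int = 4) -> float:
    # n^2 entries at bytes_per_entry each, expressed in GiB
    return n_nodes ** 2 * bytes_per_entry / 2 ** 30

print(dense_adj_gib(200_000))   # ~149 GiB already at 2e5 nodes in float32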