2022
DOI: 10.48550/arxiv.2204.13429
Preprint

DOTIN: Dropping Task-Irrelevant Nodes for GNNs

Abstract: Scalability is an important consideration for deep graph neural networks. Inspired by the conventional pooling layers in CNNs, many recent graph learning approaches have introduced a pooling strategy to reduce the size of graphs for learning, so that scalability and efficiency can be improved. However, these pooling-based methods are mainly tailored to a single graph-level task and pay more attention to local information, limiting their performance in multi-task settings, which often require task-specific…
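The pooling strategy the abstract refers to can be illustrated with a minimal, hedged sketch (a generic top-k pooling example, not DOTIN's own node-dropping procedure): PyTorch Geometric's TopKPooling scores nodes and keeps only the highest-scoring fraction, so later GNN layers operate on a smaller graph. The model class, feature dimensions, and toy graph below are assumptions made for the example.

# Minimal sketch (not DOTIN itself): a standard top-k pooling layer that
# scores nodes and keeps only the highest-scoring fraction, shrinking the
# graph that subsequent GNN layers must process.
import torch
from torch_geometric.nn import GCNConv, TopKPooling

class PooledGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, keep_ratio=0.5):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)
        # TopKPooling drops the (1 - keep_ratio) lowest-scoring nodes.
        self.pool = TopKPooling(hidden_dim, ratio=keep_ratio)

    def forward(self, x, edge_index, batch=None):
        x = self.conv(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
        return x, edge_index, batch

# Toy graph: 6 nodes with 8-dimensional features and a short chain of edges.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])
model = PooledGNN(in_dim=8, hidden_dim=16, keep_ratio=0.5)
x_out, edge_index_out, batch = model(x, edge_index)
print(x_out.shape)  # roughly half the nodes remain after pooling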

Cited by 2 publications (2 citation statements) · References 23 publications
“…Traditional bias metrics such as ∆SP for statistical parity and ∆EO for equal opportunity are computed on the predicted class labels. However, a single training node can hardly twist these predicted labels (Zhang et al. 2022a; Sun et al. 2020).…”
Section: Probabilistic Distribution Disparity
confidence: 99%
“…However, these metrics are not applicable in our task. The reason is that most of them are computed based on the predicted labels, while a single training node can barely twist these predicted labels on test data (Zhang et al. 2022a; Sun et al. 2020). Consequently, the influence of a single training node on the model bias would be hard to capture.…”
Section: Introduction
confidence: 99%
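Both citing statements contrast per-node influence with group-fairness metrics computed on hard predicted labels. As a hedged sketch of those standard definitions (the function names, variable names, and toy data are illustrative assumptions, not taken from the cited papers), ∆SP can be read as the gap in positive-prediction rates between sensitive groups and ∆EO as the gap in true-positive rates:

# Hedged sketch of the group-fairness metrics the citing work refers to:
# statistical parity difference (Delta_SP) and equal opportunity difference
# (Delta_EO), both computed from hard predicted labels.
import numpy as np

def delta_sp(y_pred, sensitive):
    # |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def delta_eo(y_pred, y_true, sensitive):
    # |P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1)|
    pos = y_true == 1
    return abs(y_pred[pos & (sensitive == 0)].mean()
               - y_pred[pos & (sensitive == 1)].mean())

# Toy binary predictions, ground-truth labels, and a binary sensitive attribute.
y_pred    = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true    = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(delta_sp(y_pred, sensitive), delta_eo(y_pred, y_true, sensitive))

Because these quantities depend only on the discrete predicted labels, removing or perturbing a single training node rarely flips enough predictions to move them, which is the limitation the citing statements point out.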