Recent Advances in Reliable Deep Graph Learning: Adversarial Attack, Inherent Noise, and Distribution Shift

Preprint, 2022
DOI: 10.48550/arxiv.2202.07114

Cited by 3 publications (4 citation statements). References 0 publications.
“…However, in real-world scenarios, distribution shifts, such as covariate shifts, frequently occur between the labeled training set and the unlabeled testing set [32]. As a result, a GNN classifier may overfit to the irregularities of the training data, adversely affecting its performance post-deployment.…”
Section: Distribution Shifts on Graphs
confidence: 99%
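The covariate-shift failure mode described above can be illustrated with a minimal, framework-free sketch (my own example, not from the cited works): a classifier is fitted on features drawn from one distribution, then evaluated on features drawn from a shifted distribution while the labeling rule stays fixed. Because the model only ever fits the irregularities of the training region, its accuracy collapses after the shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# True labeling rule (unknown to the model): y = 1 iff |x| < 2.
def label(x):
    return (np.abs(x) < 2.0).astype(int)

# Training covariates: x ~ N(0, 1), so almost all training points have |x| < 2.
x_train = rng.normal(0.0, 1.0, 2000)
y_train = label(x_train)

# A one-sided threshold classifier (predict 1 iff x > t) fitted on the training set.
# It matches the training data well only because the right tail is rarely sampled.
thresholds = np.linspace(-4.0, 4.0, 161)
train_accs = [np.mean((x_train > t).astype(int) == y_train) for t in thresholds]
best_t = thresholds[int(np.argmax(train_accs))]
train_acc = max(train_accs)

# Covariate shift: test covariates come from N(4, 1). The labeling rule is
# unchanged, but now most points have |x| >= 2 while the learned rule still
# predicts 1 almost everywhere, so accuracy drops sharply.
x_test = rng.normal(4.0, 1.0, 2000)
test_acc = np.mean((x_test > best_t).astype(int) == label(x_test))

print(f"train accuracy: {train_acc:.2f}")            # high
print(f"test accuracy under shift: {test_acc:.2f}")  # much lower
```

The same mechanism applies to GNN classifiers when graph structure or node features shift between the labeled training graph and the deployment graph; only the hypothesis class changes.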
“…There exist works aiming to estimate the uncertainty [48] for time series forecasting [71,96,36] by epistemic uncertainty. Nevertheless, the inevitable aleatoric uncertainty of time series is often ignored, which may stem from error-prone data measurement, collection, and so forth [97]. Another line of studies focuses on detecting noise in time series data [66] or devising suitable models for noise alleviation [33].…”
Section: A3 Uncertainty Estimation and Denoising for Time Series Fore...
confidence: 99%
“…Hence, developing robust GNNs is another important aspect of trustworthiness and many efforts have been taken. There are already several comprehensive surveys about adversarial attacks and defenses on graphs [29,87,171,197]. Therefore, in this section, we briefly give the overview of adversarial learning on graphs, but focus more on methods in emerging directions such as scalable attacks, graph backdoor attacks, and recent defense methods.…”
Section: Robustness of Graph Neural Network
confidence: 99%
“…Fair GNNs [35] and explainable GNNs [36,216] also become hot topics to address the concerns in trustworthiness. There are several surveys of GNNs in robustness [87,171,197,243] and explainability [221]. However, none of them thoroughly discuss about the trustworthiness of GNNs, which should also cover the dimensions of privacy and fairness.…”
Section: Introduction
confidence: 99%