2023
DOI: 10.1155/2023/5418398
Feature-Based Graph Backdoor Attack in the Node Classification Task

Abstract: Graph neural networks (GNNs) have shown significant performance in various practical applications due to their strong learning capabilities. Backdoor attacks are a type of attack that plants a hidden behavior in a machine learning model. A GNN trained on a backdoored dataset produces an adversary-specified output on poisoned data while performing normally on clean data, which can have grave implications for applications. Backdoor attacks remain under-researched in the graph domain, and almost all existing graph backdoor…
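The core idea the abstract describes (a fixed pattern stamped into node features so that poisoned nodes are classified as an attacker-chosen class, while clean nodes are unaffected) can be sketched as a data-poisoning step. This is an illustrative sketch only, not the paper's actual method: the function name, trigger dimensions, and trigger value are all assumptions made for the example.

```python
import numpy as np

def inject_feature_trigger(X, y, poison_idx, trigger_dims,
                           trigger_value, target_label):
    """Poison selected nodes: stamp a fixed value into chosen feature
    dimensions and relabel those nodes with the attacker's target class.
    All other nodes (and their labels) are left untouched."""
    X_p, y_p = X.copy(), y.copy()
    # np.ix_ selects the cross-product of poisoned rows and trigger columns
    X_p[np.ix_(poison_idx, trigger_dims)] = trigger_value
    y_p[poison_idx] = target_label
    return X_p, y_p

# Toy node-feature matrix: 6 nodes, 4 features each (hypothetical data)
rng = np.random.default_rng(0)
X = rng.random((6, 4))
y = np.array([0, 1, 0, 1, 0, 1])

# Poison nodes 1 and 4: set features 2 and 3 to 1.0, relabel as class 0
X_p, y_p = inject_feature_trigger(X, y, poison_idx=[1, 4],
                                  trigger_dims=[2, 3],
                                  trigger_value=1.0, target_label=0)
```

A GNN trained on `(X_p, y_p)` would learn to associate the trigger pattern with the target class; at inference time, stamping the same pattern onto any node's features would steer its prediction, while trigger-free nodes are classified normally.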

Cited by 6 publications (1 citation statement) | References 33 publications
“…The defense methods at this stage primarily focus on data poisoning backdoor attacks. The existing works show reasonable defense performance on label-flipping backdoor attack [186], [187] and trigger-based backdoor attack [188], [189]. In the rest of this section, two typical defense methods implemented during the local training phase are analyzed, and their limitations are discussed accordingly.”
Section: A Defense at Local Training Phase
confidence: 99%