2021
DOI: 10.48550/arxiv.2106.11133
Preprint

GraphMixup: Improving Class-Imbalanced Node Classification on Graphs by Self-supervised Context Prediction

Abstract: Recent years have witnessed great success in handling node classification tasks with Graph Neural Networks (GNNs). However, most existing GNNs are based on the assumption that node samples for different classes are balanced, while for many real-world graphs, there exists the problem of class imbalance, i.e., some classes may have much fewer samples than others. In this case, directly training a GNN classifier with raw data would under-represent samples from those minority classes and result in sub-optimal perf…

Cited by 9 publications (14 citation statements)
References 22 publications
“…[45] take a further step to unify these approaches into a Bayesian graph learning framework. Besides, an advanced data augmentation strategy, namely Mixup, has recently been applied to DGL and has proven effective for several graph-related tasks [115,123,128]. On the other hand, some studies try to explicitly impose regularization on the hypothesis space, i.e., building GNN models with predefined constraints or inductive biases, which share a similar idea with the approaches described in Section 2.4.2. p-Laplacian based GNNs [37] introduce a discrete p-Laplacian regularization framework to derive a new message passing scheme, making GNNs effective on both homophilic and heterophilic graphs and improving robustness on graphs with noisy edges.…”
Section: Enhancing Techniques
Mentioning confidence: 99%
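For context, Mixup (as referenced in the statement above) is a data augmentation strategy that trains on convex combinations of pairs of examples and their labels; in its standard form,

\tilde{x} = \lambda x_i + (1-\lambda)\, x_j, \qquad \tilde{y} = \lambda y_i + (1-\lambda)\, y_j, \qquad \lambda \sim \mathrm{Beta}(\alpha, \alpha).

The graph-specific variants cited here ([115, 123, 128]) typically adapt this interpolation to node features or hidden representations rather than to the graph structure itself; exactly how differs per paper.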
“…Recently, some efforts have been made to improve imbalanced node classification [7], [18]-[21]. For instance, DPGNN [19] proposes a class prototype-driven training loss to maintain the balance of different classes.…”
Section: A Class Imbalance Problem
Mentioning confidence: 99%
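To make the idea of a class prototype-driven loss concrete, the sketch below shows a generic prototypical classification loss (in the spirit of prototypical networks): each class is represented by the mean embedding of its labeled nodes, and nodes are scored by their distance to these prototypes, so every class, however small, contributes exactly one prototype. This is an illustrative stand-in under those assumptions, not DPGNN's actual loss.

```python
import torch
import torch.nn.functional as F

def prototype_loss(h, y, num_classes):
    """Generic class-prototype loss over node embeddings (illustrative only).

    h: [N, d] embeddings of labeled nodes, y: [N] integer class labels.
    Assumes every class has at least one labeled node. Each class contributes
    a single prototype (its mean embedding), so minority classes are not
    swamped by majority-class samples in the loss.
    """
    protos = torch.stack([h[y == c].mean(dim=0) for c in range(num_classes)])  # [C, d]
    logits = -torch.cdist(h, protos)  # negative Euclidean distance as class score
    return F.cross_entropy(logits, y)
```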
“…As the mixing of graph topology is not well-defined, and mixed nodes may interfere with each other, it is non-trivial to apply this technique to the graph domain. There have been some attempts to address these difficulties [21], [46], [47]. For example, [46] uses a separate MLP network to conduct mixup and transfers the knowledge to the graph neural network.…”
Section: Mixup
Mentioning confidence: 99%
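As a rough illustration of why mixing in a representation space sidesteps the topology problem raised above, the sketch below interpolates hidden node embeddings (produced by any GNN or MLP encoder) rather than the graph structure itself. It is a generic, minimal example, not the exact procedure of [46] or GraphMixup; the pairing strategy, Beta parameter, and the downstream classifier are assumptions.

```python
import torch

def mixup_node_embeddings(h, y, num_classes, alpha=1.0):
    """Interpolate node embeddings and one-hot labels pairwise.

    h: [N, d] node embeddings from any encoder (GNN or MLP).
    y: [N] integer class labels.
    Returns mixed embeddings and soft labels; the graph topology is left
    untouched, which is why the mixing is done in embedding space rather
    than on edges.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(h.size(0))                    # random pairing of nodes
    y_onehot = torch.nn.functional.one_hot(y, num_classes).float()
    h_mix = lam * h + (1.0 - lam) * h[perm]             # convex combination of embeddings
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return h_mix, y_mix

# Usage sketch (hypothetical names): mixed pairs are fed to the classifier
# head alongside the real nodes, using a soft-label cross-entropy.
#   h = encoder(x, edge_index)
#   h_mix, y_mix = mixup_node_embeddings(h, y, num_classes)
#   loss = soft_cross_entropy(classifier(h_mix), y_mix)
```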
“…This immediately raises another problem: it is hard to identify the subgroup and give it special treatment without enough data about the subgroup of users within the favored group. Prior researchers have done much work on data augmentation for graph data, such as GraphSMOTE [46], GraphMixup [40,41], and GraphCrop [39], which expand the training set by generating synthetic data. However, most data augmentation strategies focus on mitigating the class-imbalance issue and are not designed to provide data that helps detect subgroups.…”
Section: Computational Graphs of the GNN
Mentioning confidence: 99%