2019
DOI: 10.14778/3324301.3324307

Multi-dimensional balanced graph partitioning via projected gradient descent

Abstract: Motivated by performance optimization of large-scale graph processing systems that distribute the graph across multiple machines, we consider the balanced graph partitioning problem. Compared to most of the previous work, we study the multi-dimensional variant in which balance according to multiple weight functions is required. As we demonstrate by experimental evaluation, such multi-dimensional balance is essential for achieving performance improvements for typical distributed graph processing workloads. We propos…

Cited by 15 publications (5 citation statements)
References 35 publications
“…To train a GNN, we rely on the technique called message passing; we refer the readers to Appendix A of [10] for more details.…”
Section: Preliminaries 2.1 Graph Neural Network
confidence: 99%
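For context, message passing updates each node's representation by aggregating its neighbors' features and combining them with the node's own features. A minimal sketch of one such round; the mean aggregation, ReLU update, and all names here are illustrative assumptions, not the formulation from [10]:

```python
# Minimal sketch of one message-passing round with mean aggregation.
# Aggregation choice, update rule, and names are assumptions for illustration.
import numpy as np

def message_passing_layer(adj, X, W_self, W_neigh):
    """adj: (n, n) 0/1 adjacency; X: (n, d) node features;
    W_self, W_neigh: (d, d') learned weight matrices."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)    # guard against isolated nodes
    neigh_mean = (adj @ X) / deg                        # average neighbor features
    return np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)  # ReLU update
```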
“…To address this issue, after each gradient descent iteration, we map the negative importance scores back to 0. Mapping the negative importance scores to 0 follows the general idea of projected gradient descent (PGD) [2]. In addition, the summation of the importance scores could deviate from 1.…”
Section: Learning-based Aggregation (Lbaggr)
confidence: 99%
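A minimal sketch of the projection step this excerpt describes: clip negative importance scores to 0 and renormalize so they sum to 1. The renormalization choice and the function name are our assumptions, not necessarily the cited work's exact projection:

```python
import numpy as np

def project_scores(scores, eps=1e-12):
    # Map negative importance scores back to 0 (projected gradient descent idea).
    scores = np.maximum(scores, 0.0)
    total = scores.sum()
    if total < eps:                      # degenerate case: fall back to uniform scores
        return np.full_like(scores, 1.0 / scores.size)
    return scores / total                # restore the sum-to-1 constraint

# Inside a PGD loop (sketch): scores = project_scores(scores - lr * grad)
```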
“…Scaling up B++&C. One of the main advantages of B++&C is its efficiency. The algorithm is a gradient descent approach inspired by Avdiukhin, Pupyrev, and Yaroslavtsev (2019) applied to a quadratic function. Using a technique which we refer to as inverse kernel trick (see Section 2), for several widely used similarities and distance measures, we can represent A as a product of low-rank matrices, which allows us to compute the gradient efficiently.…”
Section: Our Contributions
confidence: 99%
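To illustrate why a low-rank representation of A makes the gradient cheap: for the quadratic f(x) = x⊤Ax, the gradient is (A + A⊤)x, and with A = UV⊤ (U, V of shape n×r, r ≪ n) it can be formed in O(nr) time instead of O(n²). A sketch under that assumption; the factorization itself (the inverse kernel trick) is taken as given:

```python
import numpy as np

def quadratic_gradient_lowrank(U, V, x):
    # Gradient of f(x) = x^T A x with A = U @ V.T:
    # (A + A^T) x = U (V^T x) + V (U^T x), computed in O(n * r) time.
    return U @ (V.T @ x) + V @ (U.T @ x)
```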
“…Our algorithm is based on the approach described in Avdiukhin, Pupyrev, and Yaroslavtsev [2019]: we optimize a continuous relaxation (x_i ∈ [−1, 1]) of the function above. Algorithm 1 is a projected gradient descent approach which optimizes f(x) = x⊤Wx under constraints x_i ∈ [−1, 1] and ∑_i x_i = 2δn.…”
Section: Bisect++ and Conquer
confidence: 99%
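A minimal sketch of projected gradient descent in this setting, with the projection onto {x : x_i ∈ [−1, 1], ∑_i x_i = 2δn} computed by bisecting over a shift μ. This follows the general scheme in the excerpt, not the cited Algorithm 1 itself; the step size, iteration counts, and descent direction are assumptions:

```python
import numpy as np

def project(y, target_sum, iters=50):
    # Project y onto {x : x_i in [-1, 1], sum_i x_i = target_sum} by bisecting
    # over a shift mu; requires |target_sum| <= len(y), i.e. |delta| <= 1/2.
    b = np.abs(y).max() + 2.0
    lo, hi = -b, b
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        if np.clip(y + mu, -1.0, 1.0).sum() < target_sum:
            lo = mu
        else:
            hi = mu
    return np.clip(y + (lo + hi) / 2.0, -1.0, 1.0)

def pgd_quadratic(W, n, delta, lr=0.01, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = project(rng.uniform(-1.0, 1.0, n), 2.0 * delta * n)   # feasible start
    for _ in range(steps):
        grad = (W + W.T) @ x            # gradient of f(x) = x^T W x
        x = project(x - lr * grad, 2.0 * delta * n)
    return x
```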