2017 IEEE 24th International Conference on High Performance Computing (HiPC)
DOI: 10.1109/hipc.2017.00012

Shared-Memory Graph Truss Decomposition

Abstract: We present PKT, a new shared-memory parallel algorithm and OpenMP implementation for the truss decomposition of large sparse graphs. A k-truss is a dense subgraph definition that can be considered a relaxation of a clique. Truss decomposition refers to a partitioning of all the edges in the graph based on their k-truss membership. The truss decomposition of a graph has many applications. We show that our new approach PKT consistently outperforms other truss decomposition approaches for a collection of large sp…
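The definitions in the abstract can be made concrete with a small serial sketch. This is only the textbook peeling approach, not the paper's parallel PKT algorithm, and the function and variable names are illustrative: a k-truss is a subgraph in which every edge lies on at least k-2 triangles, and truss decomposition assigns each edge the largest such k.

```python
def truss_decomposition(edges):
    """Serial peeling sketch (not the paper's parallel PKT algorithm):
    repeatedly remove edges with too little triangle support, assigning
    each edge its trussness k."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # support(e) = number of triangles containing edge e
    support = {frozenset(e): len(adj[e[0]] & adj[e[1]]) for e in edges}
    trussness = {}
    k = 2
    while support:
        # peel every edge that cannot be part of a (k+1)-truss
        peel = [e for e, s in support.items() if s <= k - 2]
        if not peel:
            k += 1
            continue
        for e in peel:
            u, v = tuple(e)
            trussness[e] = k
            del support[e]
            for w in adj[u] & adj[v]:  # triangles broken by removing e
                for f in (frozenset((u, w)), frozenset((v, w))):
                    if f in support:
                        support[f] -= 1
            adj[u].discard(v)
            adj[v].discard(u)
    return trussness

# 4-clique {0,1,2,3} plus a pendant edge (3,4): every clique edge lies on
# two triangles, so the clique edges get trussness 4; the pendant edge gets 2.
result = truss_decomposition(
    [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)])
```

The peeling order mirrors k-core decomposition, but operates on edges and triangle counts rather than on vertices and degrees, which is why parallelizing it (as PKT does) is harder: removing one edge changes the support of its triangle neighbors.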

Cited by 23 publications (13 citation statements)
References 34 publications (61 reference statements)
“…They present speedups of up to 23.6x on the Friendster graph with 72 threads. Regarding k-truss decomposition, the HPEC challenge [35] attracted interesting studies that parallelize the computation [41,45,22]. In particular, Shaden et al. [41] report competitive results with respect to the earlier version of our work [36].…”
Section: Scalability and Comparison With Peeling
confidence: 74%
“…Our speedup numbers increase with more threads, and faster solutions are possible with more cores. Recent results: there are a couple of recent studies, concurrent with our work, that introduce new efficient parallel algorithms for k-core [8] and k-truss [41,45,22] decomposition. Dhulipala et al. [8] present a new parallel bucket data structure for k-core decomposition that enables work-efficient parallelism, which is not possible with our algorithms.…”
Section: Scalability and Comparison With Peeling
confidence: 87%
“…However, for large graphs the hash table is expensive to use, and designing an optimal hash function is not a trivial problem. The second class of algorithms uses more advanced parallelization techniques on high-performance multi-core machines to significantly reduce the runtime [46][47][48][49]. Memory usage is not the major concern for these parallel programs, since they are designed for high-performance machines that can usually keep the whole graph as well as the hash table in main memory.…”
Section: Truss Decomposition
confidence: 99%
“…However, the hardware cost is high. For algorithms that avoid the hash table (e.g., [46] uses an array-based alternative), there is still room to optimize the data-structure design for more efficient memory use. In short, both the serial and the parallel algorithms have limitations.…”
Section: Truss Decomposition
confidence: 99%
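The array-based alternative to hash tables discussed in the quotes above amounts to counting an edge's triangle "support" by merging two sorted adjacency arrays. The following is a minimal sketch of that idea; the function name and dictionary layout are illustrative, not taken from [46]:

```python
def edge_support_sorted(adj_sorted, u, v):
    """Count the triangles containing edge (u, v) with a linear merge of
    two sorted neighbor arrays -- a hash-free approach in the spirit of
    the quoted passage. `adj_sorted` maps each vertex to a sorted list
    of its neighbors (an assumed layout for this sketch)."""
    a, b = adj_sorted[u], adj_sorted[v]
    i = j = count = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:      # common neighbor -> one triangle on (u, v)
            count += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return count

# A 4-clique: every edge lies on exactly two triangles.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
```

The merge runs in O(deg(u) + deg(v)) time with no per-vertex hash sets, trading constant-time membership lookups for a compact, cache-friendly layout, which is the memory-versus-speed tension the quoted passages describe.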
“…Other works on parallel algorithms for enumerating dense subgraphs from a massive graph include parallel algorithms for enumerating k-cores [39], [40], [41], [42], k-trusses [42], [43], [44], nuclei [42], and distributed memory algorithms for enumerating bicliques [45].…”
Section: Related Work
confidence: 99%