2020
DOI: 10.1109/tcad.2020.2968543

GRASS: Graph Spectral Sparsification Leveraging Scalable Spectral Perturbation Analysis

Abstract: Spectral graph sparsification aims to find ultrasparse subgraphs whose Laplacian matrix can well approximate the original Laplacian eigenvalues and eigenvectors. In recent years, spectral sparsification techniques have been extensively studied for accelerating various numerical and graph-related applications. Prior nearly-linear-time spectral sparsification methods first extract a low-stretch spanning tree from the original graph to form the backbone of the sparsifier, and then recover small portions of spectral…

Cited by 23 publications (24 citation statements). References 40 publications.
“…Several test cases have been tested in the experiments. Table 2 shows the spectral graph sparsification results on various graphs when comparing to the state-of-the-art sparsification tool GRASS [8][9][10], where N (M) represents the number of nodes (edges) in the original graph; T_grass denotes the sparsifier construction time using GRASS; T_r denotes the multilevel graph coarsening time; T_spar denotes the multilevel sparsifier construction time by SF-GRASS; |E_off| denotes the number of off-tree edges added for forming the final sparsifier from the initial spanning-tree sparsifier; κ(L_G, L_P) denotes the final relative condition number between the Laplacians of the original graph G and the sparsifier P; κ(L_G, L_S) denotes the relative condition number between the Laplacians of the original graph G and the initial spanning-tree sparsifier S generated by SF-GRASS.…”
Section: Results (mentioning)
confidence: 99%
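The relative condition number κ(L_G, L_P) reported in the excerpt above can be computed directly from the two Laplacians. A minimal dense sketch (the `laplacian` helper and edge lists are illustrative assumptions; production tools use sparse solvers and iterative eigensolvers rather than a dense pseudoinverse):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (u, v, w) edge triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def relative_condition_number(L_G, L_P, tol=1e-8):
    """kappa(L_G, L_P): ratio of the largest to the smallest nonzero
    eigenvalue of pinv(L_P) @ L_G.  Both Laplacians share the all-ones
    null space, whose zero eigenvalue is filtered out below."""
    lam = np.sort(np.linalg.eigvals(np.linalg.pinv(L_P) @ L_G).real)
    lam = lam[lam > tol]                 # drop the null-space eigenvalue
    return lam[-1] / lam[0]

# 4-node cycle G vs. its spanning-tree (path) sparsifier P
L_G = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
L_P = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
print(relative_condition_number(L_G, L_P))  # -> approx 4.0 (1 + stretch of edge (3, 0))
```

Here the single off-tree edge (3, 0) has stretch 3 (its endpoints are three unit edges apart in the tree), giving κ = 1 + 3 = 4; adding it back to the tree would bring the condition number down to 1.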
“…Since spectral sparsification aims to approximate the first few eigenvalues and eigenvectors of the original Laplacian with the minimum number of edges, it can be regarded as a low-pass filter on graphs for removing redundant edges. Spectral sparsification usually involves two steps: the first step is to generate a low-stretch spanning tree (LSST) from the original graph using star or petal decompositions [1,7]; the next step is to identify and recover spectrally-critical off-tree edges into the LSST to drastically reduce the condition number and thereby minimize the spectral mismatch [10]. However, prior spectral sparsification methods [8,24] usually require solving linear systems of equations with Laplacian solvers, which can still be computationally challenging for large problems.…”
Section: Overview Of Our Approach (mentioning)
confidence: 99%
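The two-step procedure described in this excerpt can be sketched in a few lines. This is a simplified stand-in under stated assumptions: it ranks off-tree edges by stretch (edge weight times tree effective resistance) rather than by the spectral perturbation analysis GRASS actually uses, and it takes the spanning tree as given rather than constructing a low-stretch one:

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (u, v, w) edge triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def recover_off_tree_edges(n, edges, tree_edges, k):
    """Step 2 of the sketch: rank off-tree edges by stretch
    w(e) * R_eff(e) over the tree and add the top k back to the tree."""
    Lp_pinv = np.linalg.pinv(laplacian(n, tree_edges))
    def stretch(e):
        u, v, w = e
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        return w * (b @ Lp_pinv @ b)   # weight times tree effective resistance
    off_tree = sorted((e for e in edges if e not in tree_edges),
                      key=stretch, reverse=True)
    return tree_edges + off_tree[:k]

# 4-cycle with tree = path 0-1-2-3; the lone off-tree edge (3, 0) has stretch 3
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
tree = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
print(recover_off_tree_edges(4, edges, tree, k=1))
```

High-stretch off-tree edges are exactly those whose removal distorts the spectrum most, which is why recovering a few of them drops the condition number sharply.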
“…By properly choosing g_min, it is possible to filter out the graph signal's "high-frequency" (highly-oscillating) components and only keep "low-frequency" components in x. Since chip temperature distributions mainly contain "slowly-varying" ("low-frequency") components due to relatively small ambient thermal conductance values, it is thus possible to exploit emerging spectral sparsification techniques [27,28,29] to retain a small number of edges in the sparsified thermal grids while still preserving accurate thermal profiles, since spectrally-sparsified graphs can well-preserve "low-frequency" graph signals. Based on the above intuition, Algorithm 7 is proposed for scaling up edge weights in the sparsified thermal grid by matching the "low-frequency" responses filtered by the original thermal grids.…”
Section: Algorithm 7: Algorithm For Iterative Edge Weight Scaling (mentioning)
confidence: 99%
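The low-pass-filter view in this excerpt can be made concrete with an explicit spectral filter that keeps only the Laplacian eigenmodes below a cutoff g_min. This is a dense eigendecomposition sketch for intuition only (real thermal grids are far too large for this, which is exactly why the cited work uses sparsification instead):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (u, v, w) edge triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def low_pass(L, x, g_min):
    """Project graph signal x onto the eigenvectors of L whose eigenvalue
    is below g_min, i.e. keep only the slowly-varying components."""
    lam, U = np.linalg.eigh(L)
    keep = lam < g_min
    return U[:, keep] @ (U[:, keep].T @ x)

# The 4-node path Laplacian has eigenvalues {0, 2-sqrt(2), 2, 2+sqrt(2)};
# g_min = 0.5 keeps only the constant (lambda = 0) mode.
L = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(low_pass(L, x, g_min=0.5))   # -> [2.5 2.5 2.5 2.5], the signal's mean
```

Because a good spectral sparsifier preserves the low end of the Laplacian spectrum, it preserves the output of this filter, which is the intuition behind matching low-frequency responses when scaling edge weights.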