2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs.2018.00073
Sublinear Algorithms for Local Graph Centrality Estimation

Abstract: We study the complexity of local graph centrality estimation, with the goal of approximating the centrality score of a given target node while exploring only a sublinear number of nodes/arcs of the graph and performing a sublinear number of elementary operations. We develop a technique, which we apply to the PageRank and Heat Kernel centralities, for building a low-variance score estimator through a local exploration of the graph. We obtain an algorithm that, given any node in any graph of m arcs, with probabil…
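The setting the abstract describes — estimating one node's score by exploring only a small part of the graph — can be illustrated with the classic Monte Carlo end-point estimator for PageRank. This is a minimal sketch, not the paper's low-variance technique; the adjacency-list representation and parameter names are illustrative assumptions:

```python
import random

def pagerank_estimate(adj, target, alpha=0.15, num_walks=20000, seed=0):
    """Estimate the PageRank score of `target` by simulating random walks
    from uniformly random start nodes.  Each walk terminates with
    probability `alpha` per step; the fraction of walks that end at
    `target` is an unbiased estimate of its PageRank (uniform teleport)."""
    rng = random.Random(seed)
    nodes = list(adj)
    hits = 0
    for _ in range(num_walks):
        u = rng.choice(nodes)
        while rng.random() > alpha:      # continue with probability 1 - alpha
            nbrs = adj[u]
            if not nbrs:                 # dangling node: restart uniformly
                u = rng.choice(nodes)
            else:
                u = rng.choice(nbrs)     # follow a uniformly random out-arc
        hits += (u == target)
    return hits / num_walks
```

The variance of this plain estimator scales poorly for small scores, which is precisely the issue a low-variance local estimator addresses.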

Cited by 14 publications (26 citation statements)
References 47 publications
“…Recall that in the proof of Theorem C.2, we show that, according to the assumption on the non-negativity of $\boldsymbol{x}$ and the bound given in inequality (8), we only need to approximate the prefix sum $\boldsymbol{\pi}_L = \sum_{i=0}^{L} w_i \cdot \left(\mathbf{D}^{-a}\mathbf{A}\mathbf{D}^{-b}\right)^i \cdot \boldsymbol{x}$ such that, for any $v \in V$ with $\boldsymbol{\pi}_L(v) > \frac{18}{19}\delta$, we have $|\boldsymbol{\pi}_L(v) - \hat{\boldsymbol{\pi}}(v)| \le \frac{1}{20}\boldsymbol{\pi}_L(v)$ with high probability. Hence, Theorem C.2 holds for all the proximity models discussed in this paper without Assumption 3.1, and this lemma follows. We first prove the unbiasedness of the estimated residue vector $\hat{\boldsymbol{r}}^{(\ell)}$ for each level $\ell \in [0, L]$.…”
mentioning
confidence: 86%
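The truncated prefix sum $\boldsymbol{\pi}_L = \sum_{i=0}^{L} w_i (\mathbf{D}^{-a}\mathbf{A}\mathbf{D}^{-b})^i \boldsymbol{x}$ that the excerpt approximates can be computed exactly on a small dense graph. A minimal sketch, assuming positive degrees and taking degrees as column sums (adjust for other directed conventions); the function name is illustrative:

```python
import numpy as np

def truncated_prefix_sum(A, x, w, a=0.0, b=1.0):
    """Exact truncated prefix sum  pi_L = sum_{i=0}^{L} w_i (D^{-a} A D^{-b})^i x,
    the quantity the cited proof approximates.  A is a dense adjacency matrix,
    x a seed vector, and w the weight sequence (its length fixes L + 1)."""
    deg = A.sum(axis=0)                    # degrees (assumed positive)
    P = (deg ** -a)[:, None] * A * (deg ** -b)[None, :]   # D^{-a} A D^{-b}
    pi = np.zeros_like(x, dtype=float)
    v = x.astype(float)                    # v holds (D^{-a} A D^{-b})^i x
    for w_i in w:
        pi += w_i * v
        v = P @ v                          # advance to the next power
    return pi
```

With PageRank weights $w_i = \alpha(1-\alpha)^i$ and a stochastic kernel, the mass of $\boldsymbol{\pi}_L$ equals $\sum_{i=0}^{L} w_i$, which makes the truncation error explicit.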
“…In the proximity models of PageRank, PPR, HKPR, and transition probability, we set $a = 0$ and $b = 1$, so the transition probability matrix is $\mathbf{D}^{-a}\mathbf{A}\mathbf{D}^{-b} = \mathbf{A}\mathbf{D}^{-1}$. Thus, the left side of inequality (8) becomes…”
Section: C.2 Further Explanations on Assumption 3.1
mentioning
confidence: 99%
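That $\mathbf{A}\mathbf{D}^{-1}$ is the column-stochastic random-walk matrix used by these proximity models can be checked in a few lines; the toy adjacency matrix below is an assumed example:

```python
import numpy as np

# With a = 0 and b = 1, the kernel D^{-a} A D^{-b} reduces to A D^{-1},
# which scales each column of A by the inverse of that column's degree.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)   # toy directed adjacency (assumed)
D_inv = np.diag(1.0 / A.sum(axis=0))     # inverse (column-)degree matrix
P = A @ D_inv
print(P.sum(axis=0))                     # prints [1. 1. 1.]: column-stochastic
```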
“…The algorithm runs in $\tilde{O}(n/\Delta)$ time. Bressan et al. developed a sublinear-time algorithm that employs local graph exploration [7]. Their algorithm $(1+\varepsilon)$-approximates the PageRank of a vertex in a directed graph.…”
Section: Related Work
mentioning
confidence: 99%