Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing 2022
DOI: 10.1145/3519935.3520009
Sublinear time spectral density estimation

Cited by 6 publications (3 citation statements) · References 21 publications
“…We note that the additive error eigenvalue approximation result of Theorem 1 (analogously Theorems 3 and 4) directly gives an ϵn approximation to the spectral density in the Wasserstein distance, extending the above results to a much broader class of matrices. When ∥A∥_∞ ≤ 1, A can have eigenvalues as large as n, while the normalized adjacency matrices studied in [13, 11] have eigenvalues in [−1, 1]. So, while the results are not directly comparable, our Wasserstein error can be thought of as on the order of their error of ϵ after scaling.…”
Section: Related Work
confidence: 95%
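To make the scaling comparison in this statement concrete, here is a minimal NumPy/SciPy sketch (ours, not from the cited papers): if each eigenvalue is recovered to additive error ϵn, the empirical spectral densities are within roughly ϵn in Wasserstein-1 distance, and dividing the spectrum by n rescales that to roughly ϵ, the scale of a normalized adjacency matrix. The matrix and error model below are illustrative assumptions only.

```python
# Illustrative sketch (not from the cited papers): Wasserstein-1 distance between
# a true and an approximate spectral density, before and after rescaling by n.
import numpy as np
from scipy.stats import wasserstein_distance  # W1 distance between 1-D samples

rng = np.random.default_rng(0)
n = 500

# A symmetric matrix with entries in [-1, 1]; matrices with max entry magnitude
# at most 1 can in general have eigenvalues as large as n (e.g. the all-ones matrix).
A = rng.uniform(-1.0, 1.0, size=(n, n))
A = (A + A.T) / 2.0
true_eigs = np.linalg.eigvalsh(A)

# Hypothetical eigenvalue estimates with additive error at most eps*n each
# (the guarantee discussed in the quoted statement).
eps = 0.05
approx_eigs = true_eigs + rng.uniform(-eps * n, eps * n, size=n)

w1_raw = wasserstein_distance(true_eigs, approx_eigs)             # on the order of eps*n
w1_scaled = wasserstein_distance(true_eigs / n, approx_eigs / n)  # on the order of eps
print(f"W1 error: {w1_raw:.2f} unscaled, {w1_scaled:.4f} after dividing by n")
```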
“…Recent work has studied sublinear time spectral density estimation for graph structured matrices: Braverman, Krishnan, and Musco [11] show that the spectral density of a normalized graph adjacency or Laplacian matrix can be estimated to ϵ error in the Wasserstein distance in Õ(n/poly(ϵ)) time. Cohen-Steiner, Kong, Sohler, and Valiant study a similar setting, giving runtime 2^{O(1/ϵ)} [13].…”
Section: Related Work
confidence: 99%
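For readers who want a concrete picture of the moment-based approach behind these runtime statements, below is a hedged sketch of the generic kernel-polynomial-method template: estimate the Chebyshev moments tr(T_k(A))/n with Hutchinson-style Rademacher probes, then reconstruct a smoothed density. It is illustrative only, not the specific algorithm of [11] or [13]; the function names and parameters are our own.

```python
# Illustrative sketch (not the exact algorithm of [11] or [13]): estimate the
# Chebyshev moments tr(T_k(A))/n of a symmetric A with spectrum in [-1, 1] via
# Hutchinson's stochastic trace estimator, then reconstruct a smoothed density.
import numpy as np

def chebyshev_moments(matvec, n, num_moments, num_probes=20, seed=0):
    """Estimate mu_k = tr(T_k(A)) / n for k = 0, ..., num_moments - 1.

    matvec: x -> A @ x for a symmetric A with eigenvalues in [-1, 1].
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(num_moments)
    for _ in range(num_probes):
        g = rng.choice([-1.0, 1.0], size=n)         # Rademacher probe vector
        t_prev, t_curr = g, matvec(g)               # T_0(A) g and T_1(A) g
        mu[0] += g @ t_prev
        if num_moments > 1:
            mu[1] += g @ t_curr
        for k in range(2, num_moments):
            t_next = 2.0 * matvec(t_curr) - t_prev  # three-term Chebyshev recurrence
            mu[k] += g @ t_next
            t_prev, t_curr = t_curr, t_next
    return mu / (num_probes * n)

def density_on_grid(mu, grid):
    """Chebyshev-series density (mu_0 + 2 * sum_k mu_k T_k(x)) / (pi * sqrt(1 - x^2)).

    grid should lie strictly inside (-1, 1); damping (e.g. Jackson) can be added
    to suppress Gibbs oscillations.
    """
    theta = np.arccos(np.clip(grid, -1.0, 1.0))
    series = mu[0] * np.ones_like(grid)
    for k in range(1, len(mu)):
        series += 2.0 * mu[k] * np.cos(k * theta)   # T_k(x) = cos(k * arccos x)
    return series / (np.pi * np.sqrt(1.0 - grid ** 2))
```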
“…The standard literature either considers an additive error (where, unlike usual models like floating-point arithmetic, each multiplication incurs identical error regardless of magnitude) [Cle55, FP68, MH02] or eventually boils down to bounding |a_i| (since their main concern is dependence on x) [Oli77, Oli79], which is insufficient to get our d² log(d) stability bound. The modern work we are aware of shows an O(d²) bound only for Chebyshev polynomials [BKM22], sometimes used to give an O(d³) bound for computing generic bounded polynomials [MMS18], since a degree-d polynomial can be written as a linear combination of T_k(x) with bounded coefficients.…”
Section: Technical Overview
confidence: 99%
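To illustrate the decomposition mentioned in the last sentence, here is a small sketch (our own, not code from [BKM22] or [MMS18]): a power-basis polynomial is converted to the Chebyshev basis with NumPy's poly2cheb and then evaluated with the Clenshaw recurrence, the standard Chebyshev-basis evaluation scheme whose stability such bounds concern.

```python
# Illustrative sketch: write a bounded degree-d polynomial as a combination of
# Chebyshev polynomials T_k(x) and evaluate it with the Clenshaw recurrence.
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw(cheb_coeffs, x):
    """Evaluate sum_k c_k T_k(x) via the Clenshaw recurrence."""
    b_next, b_curr = 0.0, 0.0
    for c in reversed(cheb_coeffs[1:]):
        b_next, b_curr = b_curr, 2.0 * x * b_curr - b_next + c
    return x * b_curr - b_next + cheb_coeffs[0]

# Power-basis coefficients (lowest degree first) of 1.5*x^3 - 0.5*x,
# a degree-3 polynomial bounded by 1 on [-1, 1].
poly = np.array([0.0, -0.5, 0.0, 1.5])
cheb = C.poly2cheb(poly)                  # same polynomial in the T_k basis

x = 0.3
print(clenshaw(cheb, x), np.polyval(poly[::-1], x))  # the two evaluations agree
```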