2018
DOI: 10.1007/s00222-018-0817-x

The dimension-free structure of nonhomogeneous random matrices

Abstract: Let X be a symmetric random matrix with independent but non-identically distributed centered Gaussian entries. We show a two-sided bound on the expected S_p norm of X [display omitted], where S_p denotes the p-Schatten class and the constants are universal. The right-hand side admits an explicit expression in terms of the variances of the matrix entries. This settles, in the case p = ∞, a conjecture of the first author, and provides a complete characterization of the class of infinite matrices with independent Gaussian entries that define bounded operators on ℓ². Along th…
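As a hedged sketch of what the omitted display looks like in the p = ∞ case (the notation b_ij for the entry standard deviations is an assumption of this sketch, not taken from the record above), the conjecture of Latała settled by the paper has the shape:

```latex
% Sketch only, for the p = \infty case. Assumed notation:
% b_{ij}^2 = \mathbf{E} X_{ij}^2, and \asymp denotes two-sided
% comparability up to universal constants.
\mathbf{E}\,\|X\|_{S_\infty} \;\asymp\;
  \max_i \Bigl(\sum_j b_{ij}^2\Bigr)^{1/2}
  \;+\; \mathbf{E}\max_{i,j}\,|X_{ij}|
```

The first term is the largest row norm of the variance pattern; the second captures the contribution of individually large entries.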

Cited by 53 publications
(56 citation statements)
References 14 publications
“…Whenever np_n ≫ log n, it is known that ‖W_n‖ = (2 + o(1))√(np_n) with probability tending to one with n. This result was verified in a series of works where the assumptions on the matrix sparsity and the matrix entries were sequentially relaxed (see [7,22,28,32,49]). On the other hand, when p_n → 0 with n relatively fast, one can easily verify that the extreme eigenvalues get asymptotically detached from the bulk of the spectrum.…”
Section: Introduction (mentioning)
confidence: 73%
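The asymptotic ‖W_n‖ = (2 + o(1))√(np_n) quoted above can be checked numerically. A minimal sketch (not from the cited works; the size n and sparsity p_n are chosen here so that np_n ≫ log n, and the entries are Rademacher signs kept with probability p_n, so each entry has variance p_n):

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 2000, 0.05  # np = 100 >> log n ~ 7.6: the dense-enough regime
# Symmetric matrix: +/-1 entries kept independently with probability p.
# Each entry has mean 0 and variance p, so the spectral edge should sit
# near 2 * sqrt(n * p) = 20.
mask = rng.random((n, n)) < p
signs = rng.choice([-1.0, 1.0], size=(n, n))
upper = np.triu(mask * signs)
W = upper + upper.T - np.diag(np.diag(upper))  # symmetrize, diagonal once

norm = np.abs(np.linalg.eigvalsh(W)).max()
ratio = norm / (2.0 * np.sqrt(n * p))
print(f"||W_n|| / (2 sqrt(n p_n)) = {ratio:.3f}")  # close to 1
```

Shrinking p so that np_n/log n → 0 instead produces outlier eigenvalues detached from the bulk, which is the phenomenon the snippet describes.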
“…More generally, when np_n/log n → 0, this phenomenon was observed in the case of Rademacher variables [28] and in the case of the Erdős–Rényi random graphs [6], which will be discussed later on. In the window around np_n ≍ log n it is known that, up to constant multiples, the matrix norm is of order √(log n) [6,7,32,46]; however, to the best of our knowledge, there have been no results on its exact asymptotic behavior. In this connection, we can ask the following questions: Is there a sharp phase transition (in terms of sparsity) in the appearance/disappearance of outliers in the semi-circular law? For concrete distributions, say, sparse Bernoulli matrices, what is the explicit formula for the sparsity threshold (if it exists)? What is a conceptual explanation of why the outliers appear at a particular level of sparsity? What is the exact asymptotic value of an outlier?…”
Section: Introduction (mentioning)
confidence: 99%
“…The next corollary is a version of Theorem 1.1 in the spirit of the aforementioned results from [17,14,16]. It follows directly from (1.3) and Jensen's inequality.…”
Section: Introduction and Main Results (mentioning)
confidence: 70%
“…when X_ij = a_ij g_ij and the g_ij are i.i.d. standard Gaussian variables), as was recently shown by Latała, van Handel, and Youssef in [14], and up to a logarithmic factor for any X with independent centred entries, see [16]. The advance of the two latest results is that they do not require that the entries of X be identically distributed (nor that they have equal variances).…”
Section: Introduction and Main Results (mentioning)
confidence: 92%
“…This procedure relies on an upper bound for the operator norm ‖B‖, which is of order n^{1/2} with exponentially high probability under the subgaussian moment assumption on the entries of B. Moreover, as can be seen in [7,39], one has that ‖B‖ ≤ C√n under the assumption of bounded fourth moments (see [10,11] for independent but not identically distributed entries). However, in the setting of Theorem 1.1, it is not guaranteed that the operator norm ‖B‖ has a good upper bound.…”
Section: Introduction (mentioning)
confidence: 99%
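The ‖B‖ ≤ C√n bound referenced in this snippet can also be illustrated numerically. A minimal sketch, with i.i.d. standard Gaussian entries standing in for the subgaussian case (the size n and the constant C = 2 are assumptions of this sketch; for unit-variance i.i.d. entries the largest singular value concentrates near 2√n):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
B = rng.standard_normal((n, n))  # i.i.d. subgaussian (here Gaussian) entries
# The operator norm is the largest singular value; for centered
# unit-variance i.i.d. entries it concentrates around 2 * sqrt(n),
# so the constant C = 2 suffices asymptotically.
op_norm = np.linalg.norm(B, ord=2)
ratio = op_norm / (2.0 * np.sqrt(n))
print(f"||B|| / (2 sqrt(n)) = {ratio:.3f}")  # close to 1
```

For independent but not identically distributed entries the same √n order persists under bounded fourth moments, as the snippet's references [10,11] indicate.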