2019
DOI: 10.1017/apr.2019.10

Singular vector distribution of sample covariance matrices

Abstract: We consider a class of sample covariance matrices of the form Q = T X X^* T^*, where X = (x_ij) is an M × N rectangular matrix consisting of i.i.d. entries and T is a deterministic matrix such that T^* T is diagonal. Assuming M is comparable to N, we prove that the distribution of the components of the singular vectors close to the edge singular values agrees with that of Gaussian ensembles, provided the first two moments of x_ij coincide with those of the Gaussian random variables. For the singular vectors associated wi…
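For intuition only, here is a minimal numerical sketch of the model described in the abstract; it is not the paper's method, and the dimensions, the entry normalization 1/√N, and the diagonal choice of T are illustrative assumptions. It builds Q = T X X^* T^* and extracts an eigenvector at the right spectral edge, whose rescaled components are the objects whose distribution the paper studies.

```python
import numpy as np

# Illustrative simulation of Q = T X X^* T^* (all parameters are assumptions,
# not taken from the paper).
rng = np.random.default_rng(0)
M, N = 300, 600                               # M comparable to N
X = rng.standard_normal((M, N)) / np.sqrt(N)  # i.i.d. entries, variance 1/N
T = np.diag(np.linspace(0.5, 1.5, M))         # diagonal T, so T^* T is diagonal

Y = T @ X
Q = Y @ Y.conj().T                            # Q = T X X^* T^*

# Eigenvectors of Q are the left singular vectors of Y; take the one at the
# right edge (largest eigenvalue).
evals, evecs = np.linalg.eigh(Q)              # eigenvalues in ascending order
edge_vec = evecs[:, -1]

# Edge universality (informally): the law of these rescaled components should
# agree with the case where X has Gaussian entries.
print("largest eigenvalue:", evals[-1])
print("rescaled components:", (np.sqrt(M) * edge_vec)[:5])
```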

Cited by 10 publications (5 citation statements). References 37 publications.

“…Remark 2.6. The above assumption has previously appeared in [6,10,11,21]. It guarantees a regular square-root behavior of the spectral densities ρ_{1,2c} near λ_r (see Lemma 3.6 below), which is used to prove the local deformed MP law at the soft edge.…”
mentioning; confidence: 89%

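For orientation, the square-root edge behavior referred to in this remark is the same shape exhibited by the undeformed Marchenko–Pastur density; the following LaTeX snippet records that baseline (the ratio d = M/N and the edges λ_± here are generic notation, not the paper's ρ_{1,2c} or λ_r).

```latex
% Undeformed Marchenko--Pastur density with ratio d = M/N and edges
% \lambda_\pm = (1 \pm \sqrt{d})^2; near the right edge it vanishes like a
% square root, which is the regularity the quoted assumption enforces for
% the deformed densities \rho_{1,2c} near \lambda_r.
\[
  \rho_{\mathrm{MP}}(x)
  = \frac{1}{2\pi d\,x}\sqrt{(\lambda_+ - x)(x - \lambda_-)}\;
    \mathbf{1}_{[\lambda_-,\lambda_+]}(x),
  \qquad
  \rho_{\mathrm{MP}}(x) \sim \frac{\sqrt{\lambda_+-\lambda_-}}{2\pi d\,\lambda_+}
    \sqrt{\lambda_+ - x}
  \quad \text{as } x \uparrow \lambda_+ .
\]
```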
“…For this reason, the bad event {|x_ij| ≥ q for some i, j} is negligible, and we will not consider the case where it happens throughout the proof. Next we introduce a convenient self-adjoint linearization trick, which has proved useful in studying the local laws of deformed sample random matrices [10,21,38]. We define the following (N+M) × (N+M) block matrix, which is a linear function of X.…”
mentioning; confidence: 99%

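The linearization mentioned here can be sanity-checked numerically; a minimal sketch, assuming the standard block form H = [[0, Y], [Y^*, 0]] with Y = T X (the paper's exact block matrix may differ, e.g. by carrying a spectral parameter): H is self-adjoint, linear in X, and its M largest eigenvalues are the singular values of Y.

```python
import numpy as np

# Self-adjoint linearization of the rectangular matrix Y = T X (standard form;
# the paper's block matrix may differ): H = [[0, Y], [Y^*, 0]] is an
# (M+N) x (M+N) Hermitian matrix, linear in X, whose nonzero eigenvalues are
# +/- the singular values of Y.
rng = np.random.default_rng(1)
M, N = 4, 7
X = rng.standard_normal((M, N)) / np.sqrt(N)
T = np.diag(rng.uniform(0.5, 1.5, size=M))
Y = T @ X

H = np.block([[np.zeros((M, M)), Y],
              [Y.conj().T, np.zeros((N, N))]])

sv = np.linalg.svd(Y, compute_uv=False)              # singular values, descending
ev_top = np.sort(np.linalg.eigvalsh(H))[-M:][::-1]   # M largest eigenvalues of H

print(np.allclose(ev_top, sv))                       # True: they coincide
```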
“…holds with high probability. Furthermore, denoting by ξ_i, ζ_i the singular vectors of X, for some large constant C > 0, with high probability we have [8] max…”
Section: Introduction; mentioning; confidence: 99%

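Since the quoted bound is cut off, only a notation reminder is added here: under standard SVD conventions (which may differ slightly from the source's normalization), the singular vectors ξ_i, ζ_i of X relate to the eigenvectors of XX^* and X^*X as follows.

```latex
% Standard SVD conventions (normalization assumed, may differ from the source):
% \lambda_i are the eigenvalues of XX^*, and \xi_i, \zeta_i the left/right
% singular vectors of X.
\[
  X = \sum_{i} \sqrt{\lambda_i}\,\xi_i \zeta_i^{*},
  \qquad
  (XX^{*})\,\xi_i = \lambda_i\,\xi_i,
  \qquad
  (X^{*}X)\,\zeta_i = \lambda_i\,\zeta_i,
\]
% so high-probability statements about \xi_i, \zeta_i are statements about the
% eigenvectors of the sample covariance matrices XX^* and X^*X.
```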
“…To illustrate our results and ideas, we give an overview of the present paper. As we have seen from [8,10], the self-adjoint linearization technique is quite useful in dealing with rectangular matrices. Hence, in a first step, we denote by…”
Section: Introduction; mentioning; confidence: 99%

“…Gaussian X in [17,40]; the edge universality was later proved under various moment assumptions on the entries x_ij [5,14,30,34]. Finally, for the (non-outlier) sample eigenvectors, the complete delocalization [30,43], quantum unique ergodicity [8], distribution of the eigenvector components [13] and convergence of the eigenvector empirical spectral distribution [56] have been established.…”
Section: Introduction; mentioning; confidence: 99%