False Discovery and its Control in Low Rank Estimation
2020 | DOI: 10.1111/rssb.12387

Abstract: Models specified by low rank matrices are ubiquitous in contemporary applications. In many of these problem domains, the row–column space structure of a low rank matrix carries information about some underlying phenomenon, and it is of interest in inferential settings to evaluate the extent to which the row–column spaces of an estimated low rank matrix signify discoveries about the phenomenon. However, in contrast with variable selection, we lack a formal framework to assess true or false discoveries …

Cited by 6 publications (15 citation statements). References 23 publications.

“…Using the relation (30) and a similar analysis as with the proof under Assumptions 1-5 in (10), one can arrive at the conclusion of Lemma 2 with Assumptions 1-5 in (11).…”
Section: A2 Role of H in Identifiability (mentioning)
confidence: 83%

“…Further, we examine the effect of the estimated hidden variable on the proteins. Specifically, following [30], we compute the top singular vector of the average projection matrix (computed across the seven settings) onto the 1-dimensional hidden variable subspace. The magnitudes of the entries of this singular vector are below 0.015 for all proteins except PKC, P38, and JNK, whose magnitudes are all above 0.5.…”
Section: Real Experiments with Protein Expression Dataset (mentioning)
confidence: 99%

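A minimal NumPy sketch of the computation quoted above: average the per-setting projection matrices onto the estimated 1-dimensional hidden-variable subspace, take the top singular vector of the average, and flag entries of large magnitude. The array names, dimensions, and the 0.5 cutoff are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

# Hypothetical inputs: one d x d projection matrix per experimental setting,
# each projecting onto an estimated 1-dimensional hidden-variable subspace.
d = 11  # assumed number of proteins in the expression dataset
rng = np.random.default_rng(0)
projections = []
for _ in range(7):            # seven settings, as in the quoted statement
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    projections.append(np.outer(u, u))   # rank-1 projection u u^T

# Average the projection matrices across the seven settings.
P_avg = np.mean(projections, axis=0)

# Top singular vector of the averaged projection matrix.
U, s, Vt = np.linalg.svd(P_avg)
top = U[:, 0]

# Entries of large magnitude flag variables (proteins) strongly
# associated with the estimated hidden variable.
influential = np.flatnonzero(np.abs(top) > 0.5)   # assumed threshold
print("influential variable indices:", influential)
```
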
“…The methods presented in [14,17] may be described in terms of this matrix, and they were applicable to discrete model selection problems such as graph estimation and variable selection. These ideas were extended recently to low-rank estimation problems based on a geometric reformulation of model selection [21], with a key ingredient being a suitable generalization of the aggregate matrix $P^{\mathrm{graph}}_{\lambda,\gamma}$. Specifically, for each $L^{(l)}_{\lambda,\gamma}$, let $P^{(l)}_{\lambda,\gamma} \in \mathbb{S}^d$ denote the projection operator onto the column space of $L^{(l)}_{\lambda,\gamma}$.…”
Section: Model Selection via Stability (mentioning)
confidence: 99%

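A sketch of the key ingredient quoted above: forming the projection P^(l) onto the column space of a low-rank subsample estimate L^(l) and averaging these projections across subsamples to obtain the aggregate matrix. The function name, rank tolerance, and random inputs are assumptions for illustration, not the cited papers' implementation.

```python
import numpy as np

def column_space_projection(L, rank_tol=1e-8):
    """Projection matrix onto the column space of a symmetric matrix L."""
    U, s, _ = np.linalg.svd(L)
    r = int(np.sum(s > rank_tol))   # numerical rank of L (assumed tolerance)
    Ur = U[:, :r]
    return Ur @ Ur.T                # P = U_r U_r^T, a symmetric projection

# Hypothetical subsample estimates L^(1), ..., L^(m): random rank-2 PSD matrices.
rng = np.random.default_rng(1)
d, r, m = 10, 2, 50
estimates = []
for _ in range(m):
    A = rng.standard_normal((d, r))
    estimates.append(A @ A.T)

# Aggregate projection matrix averaged over subsamples, in the spirit of the
# generalization of P^graph described in the quoted statement.
P_bar = np.mean([column_space_projection(L) for L in estimates], axis=0)

# Eigenvalues near 1 indicate directions common to most subsample estimates.
print("top eigenvalues:", np.linalg.eigvalsh(P_bar)[-3:])
```
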
“…Stage 2: Identifying Model Structure. Solving (2.2) or (2.11) with the regularization parameters obtained from the preceding step tends to lead to models that have small type-II error (formally [14] shows that type-II error in graph structure estimation is small under minimal assumptions). However, to also reduce type-I error it is useful to further restrict the models selected based on a more refined form of stability, as described in [17,21]. Specifically, while the approach of [14] considers aggregate variability, the methods in [17,21] suggest selecting a graphical model structure and a latent subspace that are common to a large proportion of the subsamples.…”
Section: Model Selection via Stability (mentioning)
confidence: 99%

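One plausible reading of the common-structure rule in the statement above, sketched in NumPy: keep only the graph edges that appear in at least a fixed proportion of the subsample estimates. The 0.9 threshold and the simulated subsample graphs are assumptions for illustration, not values from [14,17,21].

```python
import numpy as np

def stable_edges(adjacency_list, threshold=0.9):
    """Edge set present in at least `threshold` of the subsample graphs.

    adjacency_list: list of (d, d) symmetric 0/1 adjacency matrices,
    one per subsample.
    """
    freq = np.mean(adjacency_list, axis=0)    # per-edge selection frequency
    return (freq >= threshold).astype(int)    # edges common to most subsamples

# Hypothetical subsample graph estimates: a fixed edge set plus sparse noise.
rng = np.random.default_rng(2)
d, m = 8, 100
base = (rng.random((d, d)) < 0.3).astype(int)
base = np.triu(base, 1)
base = base + base.T                          # symmetric "true" edge set
subsample_graphs = []
for _ in range(m):
    noise = (rng.random((d, d)) < 0.05).astype(int)
    noise = np.triu(noise, 1)
    noise = noise + noise.T
    subsample_graphs.append(base | noise)     # estimate = true edges + spurious ones

E_stable = stable_edges(subsample_graphs, threshold=0.9)
print("stable edge count:", E_stable.sum() // 2)
```

Thresholding the selection frequency suppresses spurious edges that appear in only a few subsamples (reducing type-I error), while edges of the underlying graph, which recur across subsamples, survive.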