2018
DOI: 10.1214/17-aos1628

Finding a large submatrix of a Gaussian random matrix

Abstract: We consider the problem of finding a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has a large average entry. It was shown in [BDN12] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n/k) with high probability (w.h.p.) when k = O(log n/ log log n). The same paper provided evidence that a natural greedy algorithm called Largest Average Submatrix (LAS) should, for constant k, produce a matrix with average entry at mos…
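The objective in the abstract can be made concrete with a short NumPy sketch (our own illustration; the function name, the choice n = 200, k = 5, and the random seed are arbitrary assumptions). It computes the average entry of a chosen k × k submatrix and compares a uniformly random choice against the [BDN12] benchmark 2√(log n/k):

```python
import numpy as np

def submatrix_average(A, rows, cols):
    """Average entry of the submatrix of A indexed by rows x cols."""
    return A[np.ix_(rows, cols)].mean()

rng = np.random.default_rng(0)
n, k = 200, 5
A = rng.standard_normal((n, n))

# Benchmark from [BDN12]: the best k x k submatrix has average entry
# about 2*sqrt(log(n)/k) w.h.p. when k = O(log n / log log n).
benchmark = 2 * np.sqrt(np.log(n) / k)

# A uniformly random row/column choice gives an average ~ N(0, 1/k^2),
# far below the benchmark -- the point is that finding a good submatrix
# requires search, not sampling.
rows = rng.choice(n, k, replace=False)
cols = rng.choice(n, k, replace=False)
print(submatrix_average(A, rows, cols), benchmark)
```

A random pick hovers near zero, while the benchmark here is roughly 2.06, which is what makes the search problem nontrivial.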

Cited by 20 publications (18 citation statements)
References 21 publications (35 reference statements)
“…[3], and consists of consecutive updates of the k rows and k columns, starting from a random k × k submatrix and repeating the updates until guaranteed convergence to a local maximum, meaning that the resulting submatrix cannot be improved by changing only its column set or its row set. A recently introduced improved version of this algorithm, analysed in [16] and named the Iterative Greedy Procedure (IGP), follows a simple greedy scheme: starting from one randomly chosen row, it adds the best columns and rows sequentially until a k × k submatrix is recovered. This algorithm outputs provably better results, at least in the case of large Gaussian random matrices.…”
Section: Localization Via Biclustering Methods
confidence: 99%
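The two algorithms described in that citation statement can be sketched in a few lines of NumPy. This is our own minimal illustration, not the authors' implementation: the function names `las` and `igp`, the tie-breaking via `argsort`/`argmax`, and the stopping test are assumptions consistent with the description above (LAS alternates full row-set and column-set updates to a local maximum; IGP grows the submatrix one best row/column at a time from a single random row):

```python
import numpy as np

def las(A, k, rng):
    """Largest Average Submatrix sketch: alternate row/column updates
    until neither side's update changes the selection (local maximum)."""
    n = A.shape[0]
    cols = rng.choice(n, k, replace=False)
    rows = np.argsort(A[:, cols].sum(axis=1))[-k:]
    while True:
        new_cols = np.argsort(A[rows, :].sum(axis=0))[-k:]   # best k columns given rows
        new_rows = np.argsort(A[:, new_cols].sum(axis=1))[-k:]  # best k rows given columns
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            break
        rows, cols = new_rows, new_cols
    return rows, cols

def igp(A, k, rng):
    """Iterative Greedy Procedure sketch: start from one random row,
    then alternately add the single best new column and best new row."""
    n = A.shape[0]
    rows, cols = [int(rng.integers(n))], []
    while len(cols) < k or len(rows) < k:
        if len(cols) < k:
            scores = A[rows, :].sum(axis=0)
            scores[cols] = -np.inf          # exclude already-chosen columns
            cols.append(int(np.argmax(scores)))
        if len(rows) < k:
            scores = A[:, cols].sum(axis=1)
            scores[rows] = -np.inf          # exclude already-chosen rows
            rows.append(int(np.argmax(scores)))
    return np.array(rows), np.array(cols)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
r, c = las(A, 5, rng)
r2, c2 = igp(A, 5, rng)
```

Each LAS step weakly increases the submatrix average, and Gaussian entries make ties a probability-zero event, so the alternation terminates at a local maximum; IGP by construction stops after exactly k rows and k columns are chosen.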
“…In other words, the one-dimensional set of distances between the near-optimal states is disconnected. This property has been established for various problems arising from theoretical computer science and combinatorial optimization, for instance random constraint satisfaction [2,33,55], Max Independent Set [32,66], and a max-cut problem on hypergraphs [24]. Further, OGP has been shown to act as a barrier to the success of a family of "local algorithms" on sparse random graphs [26,24,32,33].…”
Section: Introduction
confidence: 99%
“…This problem has natural connections to the planted clique problem [5], sparse PCA [6], biclustering [17,32,69], and community detection [1,56,59]. All these problems are expected to exhibit a statistical-computational gap: there are regimes where optimal statistical performance might be impossible to achieve using computationally feasible statistical procedures.…”
Section: Introduction
confidence: 99%