2019 · Preprint
DOI: 10.48550/arxiv.1902.03373
An Optimal-Storage Approach to Semidefinite Programming using Approximate Complementarity

Abstract: This paper develops a new storage-optimal algorithm that provably solves generic semidefinite programs (SDPs) in standard form. This method is particularly effective for weakly constrained SDPs. The key idea is to formulate an approximate complementarity principle: Given an approximate solution to the dual SDP, the primal SDP has an approximate solution whose range is contained in the eigenspace with small eigenvalues of the dual slack matrix. For weakly constrained SDPs, this eigenspace has very low dimension…
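To make the complementarity principle concrete, here is a minimal numerical sketch, not the authors' implementation: the function name and the least-squares finishing step are illustrative assumptions, and a faithful version would also keep a positive-semidefiniteness constraint on the small block. It assumes a standard-form SDP, minimize ⟨C, X⟩ subject to tr(AᵢX) = bᵢ and X ⪰ 0, with dual slack Z = C − Σᵢ yᵢAᵢ.

```python
import numpy as np

def primal_from_dual(C, A_list, b, y, r):
    """Recover an approximate primal solution from an approximate dual
    solution y via approximate complementarity: restrict the primal
    variable to the eigenspace of the dual slack Z = C - sum_i y_i A_i
    associated with its r smallest eigenvalues, then fit the small block.
    """
    Z = C - sum(yi * Ai for yi, Ai in zip(y, A_list))
    w, V = np.linalg.eigh(Z)          # eigenvalues in ascending order
    V = V[:, :r]                      # basis for the small-eigenvalue space
    # Parametrize X = V S V^T and fit the constraints tr(A_i X) = b_i
    # by least squares (a simplification: the paper's reduced problem
    # also keeps S positive semidefinite).
    M = np.stack([(V.T @ Ai @ V).ravel() for Ai in A_list])
    s, *_ = np.linalg.lstsq(M, np.asarray(b), rcond=None)
    S = s.reshape(r, r)
    S = (S + S.T) / 2                 # symmetrize; constraint values unchanged
    return V @ S @ V.T
```

The point of the sketch is the storage profile: once V is fixed, only an r × r variable and d constraints remain, so the working memory is O(nr + d) rather than O(n²), which is where the storage-optimality claim for weakly constrained SDPs (small d, hence small r) comes from.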

Cited by 10 publications (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting) · References 77 publications
“…If the selected value of rank r satisfies r(r + 1) ≥ 2d and the constraint set is a smooth manifold, then any second-order critical point of the nonconvex problem is a global optimum [8]. Another approach, which requires Θ(d + nr) working memory, is to first determine (approximately) the subspace in which the (low) rank-r solution to an SDP lies and then solve the problem over the (low) r-dimensional subspace [11].…”
Section: Literature Review · citation type: mentioning · confidence: 99%
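As one concrete instance of the factorization approach described in the quotation above, the following is a minimal Burer-Monteiro-style sketch for the toy max-cut SDP, minimize ⟨C, X⟩ subject to diag(X) = 1 and X ⪰ 0, where the feasible set for U (rows on the unit sphere) is a smooth manifold. The problem choice, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def burer_monteiro_maxcut(C, r, iters=500, step=1e-2, seed=0):
    """Run projected gradient descent on the factorization X = U U^T
    for min <C, X> s.t. diag(X) = 1, X >= 0. The constraint
    diag(U U^T) = 1 puts each row of U on the unit sphere, so the
    projection step simply renormalizes the rows.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    U = rng.standard_normal((n, r))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    for _ in range(iters):
        grad = 2 * C @ U              # gradient of <C, U U^T> (C symmetric)
        U -= step * grad
        U /= np.linalg.norm(U, axis=1, keepdims=True)   # project back
    return U                          # X = U @ U.T is never formed
```

Here d = n (one constraint per diagonal entry), so the quoted condition r(r + 1) ≥ 2d holds once r is roughly √(2n), the regime in which second-order critical points are guaranteed to be global optima [8].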
“…In each case, the decision variable is an n × n matrix and there are d = Ω(n²) constraints. While reducing the memory bottleneck for large-scale SDPs has been studied quite extensively in the literature [9,11,19,37], all these methods use memory that scales linearly with the number of constraints and also depends on either the rank of the optimal solution or an approximation parameter. A recent Gaussian-sampling-based technique to generate a near-optimal, near-feasible solution to SDPs with a smooth objective function involves replacing the decision variable X with a zero-mean random vector whose covariance is X [28].…”
Section: Introduction · citation type: mentioning · confidence: 99%
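The Gaussian-sampling idea quoted above (from [28]) can be sketched as follows: if X is represented through a factor U with X = UUᵀ (an illustrative assumption, not the cited paper's full algorithm), then z = Ug with g ~ N(0, I) satisfies E[zzᵀ] = X, so linear functionals of X can be estimated from samples without ever forming the n × n matrix.

```python
import numpy as np

def sample_from_covariance(U, rng):
    """Draw z with E[z z^T] = U U^T by setting z = U g, g ~ N(0, I_r)."""
    return U @ rng.standard_normal(U.shape[1])

# Estimate the linear functional <C, X> = E[z^T C z] from samples.
rng = np.random.default_rng(0)
n, r = 100, 5
U = rng.standard_normal((n, r))
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                     # symmetrize the test objective
samples = []
for _ in range(2000):
    z = sample_from_covariance(U, rng)
    samples.append(z @ C @ z)
print(np.mean(samples), np.trace(C @ U @ U.T))   # estimate vs. exact value
```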
“…Alternatively, Ding et al. [11] compute a low-rank solution to an SDP with linear equality constraints by first approximately finding the subspace in which the solution lies. This subspace is computed by finding the null space of the dual slack variable.…”
Section: Related Work on Low-Memory Algorithms for SDP · citation type: mentioning · confidence: 99%
“…This involves maintaining a lower-dimensional sketch of the decision variable, while preserving the convexity of the problem formulation. This approach has primarily been developed in cases where either we know an a priori bound on the rank of the solution [11] or the aim is to generate a low-rank approximation of the solution [35].…”
Section: Introduction · citation type: mentioning · confidence: 99%
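The sketching idea described above can be illustrated with a simplified Nyström-style sketch in the spirit of [35]: maintain S = XΩ for a fixed random test matrix Ω, update it cheaply as rank-one updates to X arrive, and reconstruct a low-rank approximation only at the end. The class and parameter names are illustrative, and production implementations use a more carefully stabilized reconstruction than this one.

```python
import numpy as np

class NystromSketch:
    """Track S = X @ Omega for a random test matrix Omega, so a
    rank-one update X += eta * v v^T costs O(nk) memory and time
    instead of the O(n^2) needed to store X itself.
    """
    def __init__(self, n, k, seed=0):
        rng = np.random.default_rng(seed)
        self.Omega = rng.standard_normal((n, k))
        self.S = np.zeros((n, k))

    def update(self, v, eta):
        # X += eta * v v^T  implies  S += eta * v (v^T Omega)
        self.S += eta * np.outer(v, v @ self.Omega)

    def reconstruct(self, shift=1e-8):
        # Nystrom reconstruction X_hat = S (Omega^T S)^+ S^T,
        # with a small shift for numerical stability.
        core = self.Omega.T @ self.S
        core = (core + core.T) / 2 + shift * np.eye(core.shape[0])
        return self.S @ np.linalg.solve(core, self.S.T)
```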
“…Moreover, prox-operators for some complex penalty functions are computationally expensive, and it may be more efficient to instead use subgradients. For example, the proximal operator for the maximum-eigenvalue function that appears in dual-form semidefinite programs (e.g., see Section 6.1 in Ding et al., 2019) may require computing a full eigendecomposition (with a cubic arithmetic cost). In contrast, we can form a subgradient by computing only the top eigenvector via the power method or the Lanczos algorithm.…”
Section: Introduction · citation type: mentioning · confidence: 99%
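The contrast drawn above is easy to make concrete: a subgradient of the maximum-eigenvalue function at a symmetric matrix A is vvᵀ for any unit top eigenvector v, which Lanczos iteration obtains from matrix-vector products alone, whereas evaluating the prox of λ_max in general needs all eigenvalues. A minimal sketch follows (the function name is an illustrative assumption).

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def lambda_max_subgradient(A):
    """Return (lambda_max(A), one subgradient of lambda_max at A).

    Lanczos (eigsh) computes a single extreme eigenpair from
    matrix-vector products, avoiding the full eigendecomposition
    that a prox evaluation of lambda_max would typically require.
    """
    w, v = eigsh(A, k=1, which='LA')   # largest algebraic eigenvalue
    v = v[:, 0]
    return w[0], np.outer(v, v)
```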