2017
DOI: 10.1137/14099379x

Finding Low-Rank Solutions of Sparse Linear Matrix Inequalities Using Convex Optimization

Abstract: This paper is concerned with the problem of finding a low-rank solution of an arbitrary sparse linear matrix inequality (LMI). To this end, we map the sparsity of the LMI problem into a graph. We develop a theory relating the rank of the minimum-rank solution of the LMI problem to the sparsity of its underlying graph. Furthermore, we propose three graph-theoretic convex programs to obtain a low-rank solution. Two of these convex optimization problems need a tree decomposition of the sparsity graph, which is an…
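The preprocessing step the abstract describes (mapping the LMI's sparsity into a graph and computing a tree decomposition) can be illustrated with a short sketch. The snippet below is not the paper's code: it assumes networkx's approximation module and a hypothetical sparsity pattern, and uses a min-degree heuristic in place of an exact tree decomposition.

# A minimal sketch, assuming networkx and a made-up LMI sparsity
# pattern; the min-degree heuristic only upper-bounds the treewidth.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Hypothetical aggregate sparsity: an edge {i, j} whenever some data
# matrix of the LMI has a nonzero (i, j) entry.
edges = [(0, 1), (1, 2), (2, 3), (1, 3), (3, 4)]
G = nx.Graph(edges)

width, decomp = treewidth_min_degree(G)   # heuristic tree decomposition
print("treewidth upper bound:", width)
print("bags:", [sorted(bag) for bag in decomp.nodes()])

The bags of the decomposition (the nodes of decomp) are the vertex subsets that the tree-decomposition-based convex programs operate on.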

Cited by 31 publications (17 citation statements)
References 53 publications
“…So, for example, when G is a tree, r* ≤ 2. Similar approaches and extensions can be found in [27, 19, 20]. In fact, [19] proves that any polynomial optimization problem can be reformulated as a QCQP with a corresponding SDP relaxation having r* ≤ 2.…”
Section: Rank Bounds (mentioning)
confidence: 81%
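As a concrete illustration of the r* ≤ 2 bound quoted above, the sketch below (my own construction with random data, not an example from the cited works) solves the SDP relaxation of a toy QCQP whose aggregate sparsity graph is a path, i.e., a tree, and inspects the numerical rank of the returned solution. The bound only guarantees that some optimal solution has rank at most 2; the particular solution a solver returns may have higher rank.

# A minimal sketch, with random data of my own, of an SDP relaxation
# whose sparsity graph is a path (a tree), so an optimal solution of
# rank <= 2 is guaranteed to exist.
import numpy as np
import cvxpy as cp

n, m = 5, 3
rng = np.random.default_rng(0)

def path_supported():
    """Random symmetric matrix supported on the path 0-1-...-(n-1)."""
    M = np.diag(rng.standard_normal(n))
    for i in range(n - 1):
        M[i, i + 1] = M[i + 1, i] = rng.standard_normal()
    return M

C = path_supported()
A = [path_supported() for _ in range(m)]
b = rng.uniform(0.5, 1.5, m)                  # keeps X = 0 feasible

X = cp.Variable((n, n), symmetric=True)
cons = [X >> 0, cp.trace(X) <= 1]             # bound the feasible set
cons += [cp.trace(A[k] @ X) <= b[k] for k in range(m)]
cp.Problem(cp.Minimize(cp.trace(C @ X)), cons).solve(solver=cp.SCS)

eigvals = np.linalg.eigvalsh(X.value)
print("numerical rank:", int((eigvals > 1e-6).sum()))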
“…Define X_opt(n) as the n × n principal submatrix of an optimal solution of (4). It is shown in [80] that:…”
Section: Matrix Completion (mentioning)
confidence: 99%
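The excerpt above is cut off before the statement from [80], so the sketch below is only a generic illustration of the matrix-completion setting, not the cited result: given entries specified on a path pattern (hypothetical data), it fills in the remaining entries by trace minimization, a standard convex surrogate for rank.

# A minimal sketch, with hypothetical data, of PSD matrix completion:
# entries are specified on a path pattern and the rest are filled in
# by trace minimization, a convex surrogate for rank.
import numpy as np
import cvxpy as cp

n = 4
known = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}   # specified entries

X = cp.Variable((n, n), symmetric=True)
cons = [X >> 0] + [X[i, j] == v for (i, j), v in known.items()]
cp.Problem(cp.Minimize(cp.trace(X)), cons).solve(solver=cp.SCS)

print(np.round(X.value, 3))   # approaches the all-ones (rank-1) matrix

For this particular data the minimizer is the all-ones matrix, a rank-1 completion of the specified entries.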
“…While the technique is only applicable to chordal SDPs with bounded treewidths, it is able to reduce the cost of a size-n SDP all the way down to the cost of a size-n linear program, sometimes as low as O(τ³n). Indeed, chordal sparsity can be guaranteed in many important applications [32], [80], and software exists to automate the chordal reformulation [183].…”
Section: E. Other Specialized Algorithms (mentioning)
confidence: 99%
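A minimal sketch of the clique-wise decomposition behind that cost reduction, assuming a tridiagonal (hence chordal) pattern with maximal cliques {i, i+1} and toy constraints of my own choosing: the single n × n PSD constraint is replaced by n−1 coupled 2 × 2 PSD blocks, which is where the near-linear scaling comes from.

# A minimal sketch, with toy constraints of my own, of clique-wise
# decomposition for a tridiagonal (chordal) pattern: n-1 small 2x2
# PSD blocks replace one n x n PSD constraint.
import numpy as np
import cvxpy as cp

n = 6
c = np.linspace(1.0, 2.0, n)          # arbitrary positive costs

d = cp.Variable(n)                    # diagonal entries of X
u = cp.Variable(n - 1)                # first-superdiagonal entries of X

# One 2x2 symmetric PSD block per maximal clique {i, i+1}; the shared
# diagonal entries tie overlapping blocks together.
B = [cp.Variable((2, 2), symmetric=True) for _ in range(n - 1)]
cons = []
for i in range(n - 1):
    cons += [B[i] >> 0,
             B[i][0, 0] == d[i],
             B[i][1, 1] == d[i + 1],
             B[i][0, 1] == u[i]]
cons += [u >= 0.5]                    # toy constraint to avoid X = 0

prob = cp.Problem(cp.Minimize(c @ d), cons)
prob.solve(solver=cp.SCS)
print("optimal value:", round(prob.value, 4))

By the classical chordal completion theorem (Grone et al.), PSD blocks on all maximal cliques certify that the specified entries extend to a full n × n PSD matrix.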
“…Numerical rounding of the eigenvalues of X_k has already been used to reduce rank(X), but penalizing rank(X_k) in optimization was not considered [12]. Minimum-rank completions over linear matrix inequalities with general graphs have been performed in the context of optimal power flow, but few details were given about how to penalize the rank of tree components [13].…”
Section: Introduction (mentioning)
confidence: 99%
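The eigenvalue rounding mentioned above can be sketched as follows (a generic projection step, not the procedure from [12]): zero out eigenvalues below a relative tolerance and reassemble the matrix, which reduces the numerical rank of a nearly low-rank PSD solution.

# A minimal sketch (a generic projection, not the cited procedure):
# zero out eigenvalues below a relative tolerance and rebuild the
# matrix, reducing the numerical rank of a nearly low-rank PSD X.
import numpy as np

def round_rank(X, tol=1e-6):
    """Assumes X is PSD up to numerical noise."""
    w, V = np.linalg.eigh(X)          # ascending eigenvalues
    w[w < tol * max(w[-1], 0.0)] = 0.0
    return (V * w) @ V.T              # V @ diag(w) @ V.T

X = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-9]])    # numerically rank 1
print(np.linalg.matrix_rank(round_rank(X)))      # -> 1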