2013
DOI: 10.1137/100814251

Finding Approximately Rank-One Submatrices with the Nuclear Norm and $\ell_1$-Norm

Abstract: We propose a convex optimization formulation with the nuclear norm and ℓ1-norm to find a large approximately rank-one submatrix of a given nonnegative matrix. We develop optimality conditions for the formulation and characterize the properties of the optimal solutions. We establish conditions under which the optimal solution of the convex formulation has a specific sparse structure. Finally, we show that, under certain hypotheses, with high probability, the approach can recover the rank-one submatrix even wh…
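The formulation combines two convex penalties whose proximal operators are simple: entrywise soft-thresholding for the ℓ1-norm and singular value thresholding for the nuclear norm. First-order methods (including the ADMM-style algorithms several citing papers mention) are typically built from these two operators. The sketch below is illustrative only, not the paper's exact algorithm; the toy matrix and thresholds are made up for demonstration.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1: entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Proximal operator of tau * ||X||_*: singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

# Toy example (hypothetical data): a hidden rank-one block plus noise.
rng = np.random.default_rng(0)
A = np.zeros((6, 6))
A[:3, :3] = np.outer([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])  # rank-one block
A += 0.05 * rng.standard_normal((6, 6))

# Applying both shrinkage operators suppresses the small noise
# directions while the dominant rank-one structure survives.
X = soft_threshold(svt(A, 0.5), 0.1)
```

Singular value thresholding is simply soft-thresholding applied to the singular values, which is why the two operators compose naturally in splitting methods.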

Cited by 14 publications (39 citation statements)
References 19 publications
“…However, their target applications were quite different from ours: finding rank-one submatrices (Doan and Vavasis 2013), multi-task learning (Mei et al 2012), matrix completion (Richard et al 2012), and estimating block or community structures in a single, static network (Zhou et al 2013). Algorithms similar to our ADMM algorithm were developed in Richard et al (2012), Zhou et al (2013).…”
Section: Related Studies
confidence: 91%
“…Joint 1 -and nuclear-norm regularization has been studied recently in e.g., Doan and Vavasis (2013), Richard et al (2012), Mei et al (2012), Zhou et al (2013), independently from our preliminary work (Hirayama et al 2010). However, their target applications were quite different from ours: finding rank-one submatrices (Doan and Vavasis 2013), multi-task learning (Mei et al 2012), matrix completion (Richard et al 2012), and estimating block or community structures in a single, static network (Zhou et al 2013).…”
Section: Related Studies
confidence: 91%
“…We start with the following general norm minimization problem, which has been considered in [6]: min ‖X‖ s.t.…”
Section: Matrix Norm Minimization
confidence: 99%
“…These two optimization problems are closely related and their relationship is captured in the following lemmas and theorem discussed in Doan and Vavasis [6]. Lemma 1.…”
Section: Matrix Norm Minimization
confidence: 99%
“…Indeed, this property is key for the results of Doan and Vavasis [12] and Doan, Toh and Vavasis [11], who use the joint norm ‖·‖_{1,∗} to find hidden rank-one blocks inside large matrices. We will elaborate on the significance of this result further at the end of this section.…”
Section: Recovering Sparse Rank-One Matrices With the Joint Norm
confidence: 99%