2017
DOI: 10.48550/arxiv.1702.04463
Preprint
Analyzing the Weighted Nuclear Norm Minimization and Nuclear Norm Minimization based on Group Sparse Representation

Abstract: Rank minimization methods have attracted considerable interest in various areas, such as computer vision and machine learning. The most representative work is nuclear norm minimization (NNM), which can recover the matrix rank exactly under some restricted and theoretical guarantee conditions. However, for many real applications, NNM is not able to approximate the matrix rank accurately, since it often tends to over-shrink the rank components. To rectify the weakness of NNM, recent advances have shown that weig…
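The abstract's complaint about over-shrinking can be made concrete: NNM's proximal step is singular value thresholding (SVT), which subtracts the same constant from every singular value, so the large, information-carrying ones are penalized as heavily as the noise-level ones. A minimal numpy sketch of SVT (an illustration of the standard operator, not code from this paper):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_*.

    Every singular value is shrunk by the same amount tau, which is why
    plain NNM tends to over-penalize the dominant singular values.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Noisy observation of a rank-3 matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
Y = L + 0.1 * rng.standard_normal((40, 40))
X = svt(Y, tau=2.0)
```

Note that the recovered matrix `X` has its *every* nonzero singular value reduced by `tau`, including the three large ones that encode the true rank-3 structure.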

Cited by 3 publications (3 citation statements) · References 61 publications
“…The motivation for such a formulation is quite clear; however, the proposed optimization (Eq. (11)) is generally nonconvex and is more difficult to solve than nuclear norm minimization. Fortunately, recent results [34,25,12] in compressed sensing have shown that we can achieve an effective optimal solution to Eq. (11) in the case when 0 ≤ Θ_1 ≤ Θ_2 ≤ ⋯ ≤ Θ_K (§3.1).…”
Section: Structure Estimation
confidence: 99%
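The ordering condition quoted above is what makes the nonconvex weighted problem tractable: when the weights assigned in order of decreasing singular value are non-descending, shrinking each singular value by its own weight is known to yield a global optimum of the weighted proximal problem. A hedged numpy sketch (the inverse-magnitude weight rule `C / (s + eps)` is a common heuristic, not taken from this excerpt):

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted singular value thresholding.

    Assumes 0 <= w[0] <= w[1] <= ... (non-descending weights), so larger
    singular values are shrunk less; under this condition the per-value
    soft threshold is known to solve the weighted proximal problem even
    though it is nonconvex in general.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = np.asarray(w, dtype=float)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))  # rank-5
s = np.linalg.svd(Y, compute_uv=False)
w = 5.0 / (s + 1e-6)  # heuristic: larger singular values get smaller weights
X = weighted_svt(Y, w)
```

Because `s` is returned in descending order, `w` is automatically non-descending, satisfying the condition from the quotation.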
“…This section provides the mathematical derivation of the optimization proposed in Eq. (11). Our solution uses the following theorems and proofs as stated and used in [34,12,7].…”
Section: Optimization
confidence: 99%