2017
DOI: 10.1007/s10589-017-9898-5

$S_{1/2}$ regularization methods and fixed point algorithms for affine rank minimization problems

Cited by 22 publications (12 citation statements)
References 36 publications
“…⊤ is an optimal solution to (29), $c_j$ is a positive value, and $\mu$ satisfies $0 < \mu \le \|X^*\|_2^{-2}$; then the optimal solution $\beta^*_{ji}$ can be given by…”
Section: Lemma
confidence: 99%
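The step-size condition in the lemma quoted above, $0 < \mu \le \|X^*\|_2^{-2}$, is straightforward to evaluate numerically via the spectral norm. A minimal NumPy sketch, where the matrix `X` is an illustrative stand-in for the optimal solution $X^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))        # illustrative stand-in for X*

spectral_norm = np.linalg.norm(X, 2)   # largest singular value, ||X*||_2
mu_max = spectral_norm ** (-2)         # the lemma requires 0 < mu <= ||X*||_2^(-2)

mu = 0.5 * mu_max                      # any step size in (0, mu_max] is admissible
print(0.0 < mu <= mu_max)              # → True
```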
“…Its roots (complex or real) and their computational formulae have a long history with fascinating and entertaining stories. A comprehensive revisit of this subject can be found in Xing [51], and a successful application of the depressed cubic equation to compressed sensing can be found in [36,52]. The following lemma says that, under certain conditions, the Eq.…”
Section: Positive Roots of Depressed Cubic Equations
confidence: 99%
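The depressed cubic $t^3 + pt + q = 0$ referenced here can be solved numerically and its positive real roots extracted. A small sketch (the function name and tolerance are illustrative, not from the cited works):

```python
import numpy as np

def positive_roots_depressed_cubic(p, q, tol=1e-12):
    """Return the real positive roots of t^3 + p*t + q = 0, sorted ascending."""
    roots = np.roots([1.0, 0.0, p, q])           # coefficients of t^3 + 0*t^2 + p*t + q
    real = roots[np.abs(roots.imag) < tol].real  # keep the numerically real roots
    return np.sort(real[real > tol])

# t^3 - 7t + 6 = (t - 1)(t - 2)(t + 3): the positive roots are 1 and 2
print(positive_roots_depressed_cubic(-7.0, 6.0))  # → [1. 2.]
```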
“…Peng et al. [46] have shown that the ℓ_q (0 < q < 1) norm is efficient for the low-rank matrix recovery problem. Considering the ℓ_q (0 < q < 1) norm for the LLR problem, two questions should be answered.…”
Section: A. ℓ_{1/2} Regularization-Based Relaxation of Rank Minimization
confidence: 99%
“…It is worth pointing out that the rank of the mean filtering matrix is always equal to 1 under the LRR framework (the minimum rank that can be reached), whereas the rank obtained by the LRR method is usually higher than the real one. Since the proposed LhalfLRR method can achieve a lower rank than LRR (as shown in the experiment section), and the LhalfLRR model is more robust to noise than the LRR model [46]–[49], it can be a better choice for spatial information fusion of hyperspectral imagery. By applying the proposed LhalfLRR approach to each pixel of the hyperspectral imagery, the spatial structure information can be incorporated into the center pixel, eventually yielding a processed data cube R* (only the center pixel of S is kept for reconstruction while the remaining pixels are discarded, so the size of R* is the same as that of the original hyperspectral data cube R).…”
Section: B. Solving the ℓ_{1/2} Regularized Optimization Problem
confidence: 99%
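The $S_{1/2}$-type methods discussed in these citing works share a common SVD → shrink → reconstruct pattern for driving the rank down. As an illustration of that pattern only, here is plain singular value soft-thresholding; the actual $S_{1/2}$ algorithms replace the soft shrinkage below with a half-thresholding rule, and all names and parameters here are illustrative:

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: shrink every singular value by tau.
    Singular values below tau are set exactly to zero, lowering the rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # small singular values -> 0
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # rank-2 target
X_noisy = X + 0.01 * rng.standard_normal((6, 6))               # full-rank input
X_hat = svt(X_noisy, tau=0.5)
print(np.linalg.matrix_rank(X_hat))  # rank falls from 6 back toward 2
```

The shrinkage zeroes out the small noise-induced singular values while keeping the dominant ones, which is the same mechanism by which the half-thresholding variants recover a low-rank structure.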