Joint Sparse Regularization for Dictionary Learning (2019)
DOI: 10.1007/s12559-019-09650-2

Cited by 6 publications (4 citation statements).
References 50 publications.
“…$+ \gamma\,\operatorname{tr}\!\left((P_2 \otimes P_1)^{T} P_D (P_2 \otimes P_1)\right) + \lambda_3 \left\| D_y P_2 \right\|_{2,1}$ (16), where $Y_{(2)}$ and $Z_{(2)}$ are the height-mode (2-mode) unfolding matrices of tensors $\mathcal{Y}$ and $\mathcal{Z}$, respectively, $A_2 = \left(\mathcal{C} \times_1 P_1 \times_3 P_3\right)_{(2)}$, and…”
Section: Optimization of P
confidence: 99%
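The "height-mode (2-mode) unfolding" in this excerpt refers to matricizing a tensor along its second mode. As a quick aid, here is a minimal numpy sketch of such an unfolding; the tensor sizes and the exact ordering of the flattened modes are illustrative assumptions, not necessarily the cited paper's convention.

```python
import numpy as np

# Hypothetical 3-way tensor Y of size I1 x I2 x I3 (sizes are illustrative only).
Y = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

def unfold(tensor, mode):
    """Mode-n unfolding (1-based mode, as in the excerpt): move axis mode-1
    to the front and flatten the remaining axes into columns."""
    return np.moveaxis(tensor, mode - 1, 0).reshape(tensor.shape[mode - 1], -1)

# Height-mode (2-mode) unfolding Y_(2): rows index the second tensor dimension.
Y2 = unfold(Y, 2)
print(Y2.shape)  # (3, 8)
```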
“…Therefore, all four subproblems can be solved effectively with the fast and accurate ADMM technique. Because the solution processes of problems (14), (16), and (18) are similar, for conciseness we include the solution details of the four subproblems and the optimization updates of each variable in the appendices. In Appendix A, Algorithms A1-A4 summarize the solution process of the four subproblems in (12).…”
Section: Optimization of C
confidence: 99%
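The excerpt does not reproduce the subproblems themselves, so the following is only a generic sketch of how an ℓ2,1-regularized least-squares subproblem is commonly solved with ADMM via variable splitting; the problem form, the variable names (A, B, X), and the parameters (lam, rho) are assumptions for illustration, not the cited paper's exact formulation.

```python
import numpy as np

def prox_l21(M, t):
    """Row-wise soft-thresholding: proximal operator of t * ||.||_{2,1}."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * M

def admm_l21_ls(A, B, lam, rho=1.0, n_iter=100):
    """Sketch: solve min_X 0.5*||A X - B||_F^2 + lam*||X||_{2,1} by ADMM
    with the splitting X = Z (generic form, assumed for illustration)."""
    n = A.shape[1]
    X = np.zeros((n, B.shape[1]))
    Z = X.copy()
    U = X.copy()
    AtB = A.T @ B
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # cached factorization
    for _ in range(n_iter):
        rhs = AtB + rho * (Z - U)
        X = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # X-update (least squares)
        Z = prox_l21(X + U, lam / rho)                      # Z-update (row sparsity)
        U = U + X - Z                                       # dual update
    return Z
```

The X-update reuses one Cholesky factorization across iterations, which is what makes ADMM fast for repeated subproblem solves of this kind.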
“…By its definition, the ℓ2,1-norm encourages row sparsity, i.e., it forces entire rows of the matrix to be zero. It has been used successfully in feature selection [47], [48], dictionary learning [49], multi-task learning [50], [51], and multi-class classification [52], [53]. Nie et al. [47] developed a feature selection model via joint ℓ2,1-norm minimization on both the loss function and the sparse regularization term.…”
Section: B. Group Sparsity
confidence: 99%
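As a small illustration of the row-sparsity property described above, the following sketch computes the ℓ2,1-norm of a matrix (the sum of row-wise Euclidean norms); the toy matrices are assumptions chosen only to show that the penalty counts whole rows.

```python
import numpy as np

def l21_norm(M):
    """l2,1-norm: the sum of the Euclidean norms of the rows of M."""
    return np.linalg.norm(M, axis=1).sum()

# Toy matrices (illustrative values): a row-sparse matrix and a dense one.
M_row_sparse = np.array([[0.0, 0.0, 0.0],
                         [2.0, -1.0, 3.0]])
M_dense = np.array([[1.0, 0.0, 2.0],
                    [0.0, -1.0, 2.0]])

print(l21_norm(M_row_sparse))  # only the nonzero row contributes (~3.74)
print(l21_norm(M_dense))       # both rows contribute (~4.47)
```

Minimizing this penalty therefore drives entire rows of the coefficient matrix to zero, which is the group-sparsity effect the excerpt refers to.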