2013
DOI: 10.1109/lsp.2013.2258912
Dictionary Training for Sparse Representation as Generalization of K-Means Clustering

Abstract: Recent dictionary training algorithms for sparse representation like K-SVD, MOD, and their variations are reminiscent of K-means clustering, and this letter investigates such algorithms from that viewpoint. It shows: though K-SVD is sequential like K-means, it fails to simplify to K-means because it destroys the structure in the sparse coefficients. In contrast, MOD can be viewed as a parallel generalization of K-means, which simplifies to K-means without perturbing the sparse coefficients. Keeping memory usage in mind, we …
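The reduction the abstract describes can be made concrete with a minimal sketch (illustrative code, not the paper's): constrain the sparse codes to be 1-sparse with unit coefficients, and the MOD dictionary update D = Y Xᵀ(X Xᵀ)⁻¹ collapses to the K-means centroid update.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): with 1-sparse,
# unit-coefficient codes, the MOD update is exactly K-means.
rng = np.random.default_rng(0)
Y = rng.normal(size=(8, 200))                 # 200 signals of dimension 8
D = Y[:, rng.choice(200, 5, replace=False)]   # 5 initial atoms (centroids)

for _ in range(10):
    # Sparse coding under the 1-sparse/unit-coefficient constraint:
    # assign each signal to its nearest atom (the K-means E-step).
    d2 = (D**2).sum(axis=0)[:, None] - 2 * D.T @ Y  # distances up to a per-signal constant
    X = np.zeros((5, 200))
    X[d2.argmin(axis=0), np.arange(200)] = 1.0
    # MOD update D = Y X^T (X X^T)^{-1}: with binary 1-sparse X, X X^T is
    # diagonal with cluster counts, so D's columns are per-cluster means.
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
```

K-SVD runs the analogous loop atom by atom, but its SVD step also re-fits the nonzero coefficients, which is the structure-destroying behavior the abstract refers to.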

Cited by 111 publications (55 citation statements)
References 5 publications
“…It is necessary to develop an algorithm with lower computation cost by replacing each procedure with a high-speed procedure. For example, it has been reported that the computation cost of the KSVD algorithm used in SP-SIP-DIMS is high, and this problem can be solved by much faster versions such as those in [73] and [74]. These issues will be the subject of subsequent reports.…”
Section: Discussion
confidence: 99%
“…Many sparse coding algorithms have been proposed [5]. Currently, the dictionary learning algorithms include the method of optimal directions (MOD) [9], K-means singular value decomposition (KSVD) [10] or discriminative KSVD (DKSVD) [11], and sequential generalization of K-means (SGK) [12]. The main differences among them are in the second part, in which the dictionary is updated to reduce the representation error of stage 1.…”
Section: Proposed Methods
confidence: 99%
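The quoted distinction, that these algorithms differ mainly in the dictionary-update stage, can be sketched briefly (assumed shapes Y: dim×n, D: dim×K, X: K×n; a paraphrase of the published update rules, not code from [9] or [12]):

```python
import numpy as np

def mod_update(Y, X):
    """MOD: one parallel least-squares solve for the whole dictionary,
    D = Y X^T (X X^T)^{-1}, with the sparse codes X held fixed."""
    return Y @ X.T @ np.linalg.pinv(X @ X.T)

def sgk_update(Y, D, X):
    """SGK: sweep the atoms one at a time (sequential, like K-means),
    again holding the sparse codes X fixed."""
    D = D.copy()
    for k in range(D.shape[1]):
        users = np.flatnonzero(X[k])           # signals that use atom k
        if users.size == 0:
            continue                           # leave unused atoms alone
        # Residual of those signals with atom k's contribution removed.
        E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        x_k = X[k, users]
        D[:, k] = E_k @ x_k / (x_k @ x_k)      # rank-1 least squares for atom k
    return D
```

K-SVD follows the same sequential sweep but replaces the last update line with an SVD of E_k that also rewrites the nonzero entries of X[k], which is why it does not reduce to K-means while MOD and SGK do.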
“…(8), the SGK method [12] is used in the LRE-DLA to improve efficiency. The final classification of a test sample is based on its sparse code on the learned dictionary and the learned classifier.…”
Section: arg min
confidence: 99%
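The classification step the quote describes, a sparse code on the learned dictionary fed to a jointly learned linear classifier, can be sketched generically; omp, classify, and W are illustrative names under assumed conventions, not LRE-DLA's actual interface:

```python
import numpy as np

def omp(D, y, T):
    """Tiny orthogonal matching pursuit: a T-sparse code of y on D."""
    resid, support = y.copy(), []
    for _ in range(T):
        support.append(int(np.argmax(np.abs(D.T @ resid))))   # pick best atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef                      # re-fit residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def classify(D, W, y, T=5):
    """Label = argmax of the learned linear classifier W on the sparse code."""
    return int(np.argmax(W @ omp(D, y, T)))
```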
“…The square of the Frobenius norm was used in [1] to derive a sequential dictionary update stage. Other examples of sequential dictionary learning algorithms can be found in [12], [13]. The sparsity constraint used in the sparse coding stage is the pillar of any dictionary learning algorithm.…”
Section: Muhammad Hanif
confidence: 99%
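Both quoted threads, the Frobenius-norm objective and the sparsity constraint, are facets of the standard dictionary-learning problem; in common textbook notation (generic symbols, not notation taken from [1]):

```latex
% Standard sparse dictionary-learning objective (textbook form):
% Y = training signals, D = dictionary, X = sparse codes,
% x_i = i-th column of X, T_0 = sparsity level.
\min_{D,\,X} \; \|Y - DX\|_F^2
\quad \text{subject to} \quad
\|x_i\|_0 \le T_0 \quad \forall i
```

The sparse coding stage minimizes over X with D fixed; the dictionary update stage, whether parallel as in MOD or sequential as in [1], [12], [13], minimizes over D with the sparse codes (or at least their support) fixed.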