2016 IEEE International Conference on the Science of Electrical Engineering (ICSEE)
DOI: 10.1109/icsee.2016.7806064

Graph-constrained supervised dictionary learning for multi-label classification

Cited by 5 publications (5 citation statements)
References 22 publications
“…The following algorithms belong to the second category: the random walk-based MLC algorithms, 16,17,19,20,22,23 the MLC algorithm based on the Hilbert-Schmidt independence criterion, 24 and the dictionary learning-based DL-MLC algorithm. 25 Specifically, the graph DL-MLC algorithm 25 maps a training set to a graph model and improves the label-consistent K-SVD algorithm (LC-KSVD) 37 by adopting graph Laplacian regularization. The genome-scale metabolic model (GSMM) method 24 consists of three steps: first, it converts the training and test sets to a weighted undirected graph and describes the graph smoothness through a series of transformations involving the adjacency matrix, the true label set of the training instances, and the predicted label set of the test instances; second, it describes the consistency of the label space with the Hilbert-Schmidt independence criterion; third, it builds the MLC classifier by optimizing the smoothness and the consistency.…”
Section: PTMs and AAMs (mentioning)
confidence: 99%
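The graph DL-MLC construction described above combines an LC-KSVD-style objective with a graph Laplacian smoothness term on the sparse codes. The following is a minimal sketch of what such a regularized objective looks like, assuming the standard LC-KSVD notation (dictionary D, codes A, label-consistency targets Q with transform G); the function name, weights, and exact formulation are illustrative assumptions, not the cited paper's precise model.

```python
import numpy as np

def graph_lc_ksvd_objective(X, D, A, Q, G, L, alpha, beta):
    """Evaluate a graph-Laplacian-regularized, LC-KSVD-style objective.

    X : (d, n) training signals, D : (d, k) dictionary, A : (k, n) sparse codes,
    Q : (k, n) label-consistency targets (as in LC-KSVD), G : (k, k) linear
    transform of codes toward Q, L : (n, n) graph Laplacian built from the
    training set. alpha, beta are regularization weights. All names are
    illustrative; the cited papers may differ in details.
    """
    recon = np.linalg.norm(X - D @ A, 'fro') ** 2               # reconstruction
    label_consistency = np.linalg.norm(Q - G @ A, 'fro') ** 2   # LC-KSVD term
    smoothness = np.trace(A @ L @ A.T)                          # graph smoothness
    return recon + alpha * label_consistency + beta * smoothness
```

Minimizing the smoothness term tr(A L A^T) encourages instances connected in the graph to receive similar sparse codes, which is how the graph structure constrains the learned representation.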
“…For example, consider the application of graph representation to MLC methods to cope with the above-mentioned problems. However, little progress has been made with these methods and algorithms [15-29], especially in the following aspects. (1) Complexity: in order to construct a graph model, a graph-based MLC method must map the instances of an entire training set to graph vertices; since many of these instances are irrelevant to the test instances, this imposes very high time and space requirements and causes high computational complexity.…”
Section: Introduction (mentioning)
confidence: 99%
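The complexity concern raised in this statement is concrete: building a graph over all n training instances typically starts from pairwise distances, which is quadratic in n. The sketch below builds a kNN graph Laplacian the straightforward way; the function name, the choice of k, and the Gaussian edge weighting are illustrative assumptions, and the dense O(n^2) distance matrix is exactly the bottleneck the passage points to.

```python
import numpy as np

def knn_graph_laplacian(X, k=5, sigma=1.0):
    """Build a kNN graph Laplacian over all n training instances.

    X : (n, d) feature matrix. The pairwise-distance step below costs
    O(n^2 d) time and O(n^2) memory, which illustrates the scalability
    issue of graph construction over a full training set. k and sigma
    are illustrative parameter choices.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # skip self (index 0)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(1)) - W                          # unnormalized L = D - W
```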
“…The DGRDL algorithm and its extensions to a supervised setting [9], [10] already exhibit very good performance in various applications. Nevertheless, a significant limitation common to all current dictionary learning methods for graph signals (including DGRDL) is their poor scalability to high-dimensional data, which is limited by the complexity of the training problem as well as by the use of large graph Laplacian matrices.…”
Section: Introduction (mentioning)
confidence: 99%
“…Second, discriminative regularizations can be applied to the sparse coding vectors, which are directly responsible for classification. The sparse codes can be regularized via class discrimination [20], [51], label consistency [18], graph regularization [21], [23], support discrimination [25], [52], [53], …”
Section: Objectives and Contributions (mentioning)
confidence: 99%
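One way to make the class-discrimination idea above concrete is a Fisher-style scatter term on the codes: it pulls codes of the same class together and pushes class means apart. The sketch below is one illustrative formulation of such a regularizer, not the exact term used in any single cited paper.

```python
import numpy as np

def class_discrimination_regularizer(A, labels):
    """Fisher-style discriminative regularizer on sparse codes.

    Minimizing this value favors small within-class scatter and large
    between-class scatter of the coding vectors. A : (k, n) codes,
    labels : (n,) integer class labels. Illustrative sketch only.
    """
    mu = A.mean(axis=1, keepdims=True)                     # global mean code
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        Ac = A[:, labels == c]
        mc = Ac.mean(axis=1, keepdims=True)
        within += ((Ac - mc) ** 2).sum()                   # within-class scatter
        between += Ac.shape[1] * ((mc - mu) ** 2).sum()    # between-class scatter
    return within - between
```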
“…A common approach is to transform the coding vectors into their class labels with a linear regressor, and incorporate the linear prediction loss into the reconstruction loss [20], [51]. It is also favorable to require the coding vectors to be similar via different regularizations such as manifold regularization [21], [23], Fisher criterion [47], [202], distance of pairwise support coding vectors [24], etc.…”
Section: Background and Motivations (mentioning)
confidence: 99%
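The linear-regressor approach in this last statement augments the reconstruction loss with a label-prediction term of the form gamma * ||H - W A||_F^2, where H stacks the label vectors and W maps codes to labels. A hedged sketch follows; the closed-form ridge solution for W and all parameter names are illustrative assumptions rather than any one paper's exact procedure.

```python
import numpy as np

def fit_linear_label_regressor(A, H, lam=1e-3):
    """Closed-form ridge regressor W mapping sparse codes to labels.

    A : (k, n) codes, H : (c, n) label matrix (e.g. one-hot columns).
    lam is an illustrative ridge parameter added for numerical stability.
    """
    k = A.shape[0]
    return H @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(k))

def joint_loss(X, D, A, H, W, gamma):
    """Reconstruction loss plus the linear label-prediction loss, as the
    quoted passage describes; gamma trades off the two terms."""
    return (np.linalg.norm(X - D @ A, 'fro') ** 2
            + gamma * np.linalg.norm(H - W @ A, 'fro') ** 2)
```

At test time, classification then amounts to sparse-coding a new signal over D and taking the largest entry of W @ a as the predicted label.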