2017
DOI: 10.1109/tip.2017.2703101

Cross-Label Suppression: A Discriminative and Fast Dictionary Learning With Group Regularization

Abstract: This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (a column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning the representations for the same class are encouraged…
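The abstract describes a structured dictionary whose atoms carry class labels. A common way such a dictionary is used at test time (as in the SRC/CRC family of classifiers cited below) is to code a sample against each class's atoms and assign the class with the smallest reconstruction residual. The sketch below is illustrative only, using a simple ridge-regularized code rather than the paper's actual cross-label-suppression objective; all names are hypothetical.

```python
import numpy as np

def classify_by_residual(x, D, atom_labels, lam=1e-3):
    """Assign x to the class whose labelled atoms reconstruct it best.

    Illustrative sketch, not the paper's method: for each class c we solve a
    small ridge problem over the class-c sub-dictionary and keep the class
    with the smallest reconstruction residual.

    x            : signal, shape (d,)
    D            : dictionary with unit-norm columns, shape (d, K)
    atom_labels  : class label of each atom (column), shape (K,)
    """
    best_class, best_res = None, np.inf
    for c in np.unique(atom_labels):
        Dc = D[:, atom_labels == c]                # class-c sub-dictionary
        # ridge code: a = (Dc^T Dc + lam I)^{-1} Dc^T x
        a = np.linalg.solve(Dc.T @ Dc + lam * np.eye(Dc.shape[1]), Dc.T @ x)
        res = np.linalg.norm(x - Dc @ a)           # reconstruction residual
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

With a well-trained discriminative dictionary, atoms of the correct class reconstruct the sample with a much smaller residual than atoms of any other class, which is exactly the property the cross-label suppression constraint is designed to strengthen.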


Cited by 27 publications (18 citation statements)
References 57 publications
“…In this section, we compare LPLC-HDL with the representative dictionary learning methods including D-KSVD [5], LC-KSVD [6], LCLE-DL [7], FDDL [13], DL-COPAR [16] and CLSDDL [19] on the Yale face dataset [29], the Extended YaleB face dataset [30], the Labeled Faces in the Wild (LFW) dataset [31] for face recognition, the Caltech-101 object dataset [23] and the Oxford 102 Flowers dataset [32] for object classification and flower classification, respectively. Moreover, we further compare it with the Sparse Representation based Classification (SRC) [33], the Collaborative Representation based Classifier (CRC) [34], the Probabilistic CRC (ProCRC) [35], the Sparsity Augmented Collaborative Representation (SACR) [36] and some other state-of-the-art methods on the particular datasets.…”
Section: Methods
confidence: 99%
“…In recent years, hybrid DL [16,19,25,26] has attracted increasing attention for classification problems. The hybrid dictionary has been shown to outperform the other types of dictionaries, as it can preserve both the class-specific and the common information of the data.…”
Section: The Objective Function of Hybrid DL
confidence: 99%
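The statement above describes a hybrid dictionary: class-specific sub-dictionaries for the particular structure of each class plus a shared sub-dictionary for what all classes have in common. A minimal sketch of how such a dictionary could be used for classification, again with a simple ridge code rather than any cited paper's actual optimization, and with hypothetical names throughout:

```python
import numpy as np

def hybrid_classify(x, D_shared, class_dicts, lam=1e-3):
    """Toy illustration of the hybrid-dictionary idea (not a cited method).

    Each class is scored by how well its class-specific atoms, jointly with
    the shared atoms, reconstruct x; the shared atoms absorb the common
    component so the class-specific residual is more discriminative.

    x           : signal, shape (d,)
    D_shared    : shared sub-dictionary, shape (d, K0)
    class_dicts : {class_label: class-specific sub-dictionary (d, Kc)}
    """
    best_c, best_res = None, np.inf
    for c, Dc in class_dicts.items():
        B = np.hstack([D_shared, Dc])              # shared + class-c atoms
        # joint ridge code over the combined sub-dictionary
        a = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ x)
        res = np.linalg.norm(x - B @ a)
        if res < best_res:
            best_c, best_res = c, res
    return best_c
```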
“…Sun et al. [27] presented a discriminative group sparse dictionary learning (DGSDL) model which learns a class-specific sub-dictionary for each class as well as a common sub-dictionary shared by all classes. By introducing a cross-label suppression constraint and a group regularization term into the framework of SDL, Wang et al. [28] designed a cross-label suppression discriminative DL (CLS-DDL) approach. Lin et al. [29] proposed a robust, discriminative and comprehensive dictionary learning (RDCDL) model which learns a class-shared dictionary, class-specific dictionaries and a disturbance dictionary to represent the commonality, particularity and disturbance components in the data.…”
Section: Introduction
confidence: 99%