2020
DOI: 10.1109/tcsvt.2019.2923007

Joint Subspace Recovery and Enhanced Locality Driven Robust Flexible Discriminative Dictionary Learning

Abstract: We propose a joint subspace recovery and enhanced-locality-based robust flexible label-consistent dictionary learning method called Robust Flexible Discriminative Dictionary Learning (RFDDL). RFDDL mainly improves the data representation and classification abilities by enhancing the robustness to sparse errors and by encoding the locality, reconstruction error and label consistency more accurately. First, for robustness to noise and sparse errors in data and atoms, RFDDL aims at recovering the underlying…
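
For readers unfamiliar with the base technique, the sketch below shows plain sparse-representation dictionary learning with scikit-learn: learn a dictionary D and sparse codes X such that Y ≈ XD. It is a minimal illustration under assumed settings (random data, 32 atoms, OMP with 5 nonzeros per sample), not an implementation of RFDDL's robustness, locality, or label-consistency terms.

# Minimal sparse-representation dictionary learning sketch.
# All settings below are illustrative assumptions, not RFDDL parameters.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 64))      # 200 samples, 64-dim features

dl = MiniBatchDictionaryLearning(
    n_components=32,                    # dictionary size (number of atoms)
    alpha=1.0,                          # sparsity penalty weight
    transform_algorithm="omp",          # orthogonal matching pursuit coding
    transform_n_nonzero_coefs=5,        # atoms used per sample
    random_state=0,
)
X = dl.fit(Y).transform(Y)              # sparse codes, shape (200, 32)
D = dl.components_                      # learned dictionary, shape (32, 64)
print("reconstruction error:", np.linalg.norm(Y - X @ D))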

Cited by 33 publications (18 citation statements).
References 42 publications.
“…classification error is not incorporated in the objective function). Several variants of discriminative DL methods have been proposed to improve the data representation and classification abilities by encoding the locality and reconstruction error into the DL procedure, while some of them aim to concurrently improve the scalability of the algorithms by getting rid of costly norms [26,27,28]. Recently, DL has also been extended to deep learning frameworks [29], which seek multiple dictionaries at different image scales that also capture complementary coherent characteristics.…”
Section: Related Work
confidence: 99%
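
The label-consistent discriminative DL variants referred to in the statement above typically augment the reconstruction objective with a classification term. A generic form, in my own notation rather than that of the cited papers [26,27,28], is

\min_{D, X, W} \; \|Y - DX\|_F^2 + \lambda \|X\|_1 + \beta \|H - WX\|_F^2,

where Y is the data matrix, D the dictionary, X the sparse codes, H the binary class-label matrix, W a linear classifier, and \lambda, \beta trade-off weights. The scalable variants mentioned in the quote replace the costly \ell_1 norm with a smooth regularizer such as the Frobenius norm.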
“…With the increasing complexity of contents, diversity of distributions and high dimensionality of real data, how to represent data efficiently for subsequent classification or clustering remains an important research topic [1]-[3], [9], [50]. To represent data, several feasible methods can be used, such as sparse representation (SR) by dictionary learning (DL) [4]-[8], low-rank coding [9], [10], [15], [38], [39] and matrix factorization [11], [12]. These are inspired by the fact that high-dimensional data can usually be characterized in a low-dimensional or compressed space, in which noise and redundant information are removed while the useful information and important structures are preserved.…”
Section: Introduction
confidence: 99%
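
Of the representation tools listed in this statement, low-rank coding is the least self-explanatory; its core computational step is singular value thresholding, the proximal operator of the nuclear norm. The sketch below uses arbitrary data and an assumed threshold purely to illustrate the operator, and is not code from any of the cited methods.

# Singular value thresholding (SVT): soft-threshold the singular values
# of M by tau. This is the proximal operator of the nuclear norm ||M||_*,
# the building block of most low-rank coding algorithms.
import numpy as np

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))  # rank 5
noisy = L + 0.1 * rng.standard_normal((100, 80))                  # add noise
recovered = svt(noisy, tau=2.0)         # tau is an assumed threshold
print("rank after thresholding:", np.linalg.matrix_rank(recovered))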
“…Although DPL and ADDL aim to address this issue by calculating a synthesis dictionary jointly, neither considers regularizing the synthesis dictionary to obtain salient low-rank and sparse coefficients. Most real data can be represented using a sparse and/or low-rank subspace due to their intrinsic low-dimensional characteristics [4]-[10]. Thus, without properly imposing joint sparse and low-rank constraints, the resulting coefficient structures may not represent the given data appropriately and accurately.…”
Section: Introduction
confidence: 99%
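
The joint constraint this statement argues for can be written generically (again in my own notation, not that of the cited papers) as

\min_{X, E} \; \|X\|_* + \lambda_1 \|X\|_1 + \lambda_2 \|E\|_1 \quad \text{s.t.} \quad Y = DX + E,

where the nuclear norm \|X\|_* encourages low-rank, globally correlated coefficients, the \ell_1 norm encourages sparse ones, and E absorbs sparse errors. Dropping either regularizer loses one of the two structures the data is assumed to possess.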
“…It is also worth noting that the dictionary size, i.e., the number of atoms, has a direct effect on the complexity of the compact representation of data. Thus, learning a good dictionary with strong distinguishing power is crucial for data representation and classification [1]-[12], [41]-[45].…”
Section: Introduction
confidence: 99%