2017
DOI: 10.1109/tip.2017.2700761

Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification

Abstract: Linear synthesis model-based dictionary learning frameworks have achieved remarkable performance in image classification over the last decade. As a generative feature model, however, this framework suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we derive a deep insight to demonstrate that NACM is capable of simultaneously learning the task-adapte…

Cited by 11 publications (9 citation statements). References 55 publications (94 reference statements).
“…We mainly evaluate our CF-SECL method for pattern classification. The performance of our method is compared with that of some related state-of-the-art methods: KSVD [33], D-KSVD [12], LC-KSVD2 [13], FDDL [7], [8], DPL [9], JEDL [14], SRC [1], DADCL [19], DCADL [16], TL-FC [17], and DNAOL [18]. Our training model has three parameters (i.e., α, β, and γ) to estimate.…”
Section: Results (citation type: mentioning)
confidence: 99%
“…In addition, DPL and FDDL learn a dictionary for sparse coding of training samples; DSRC and JNPDL jointly learn a projection and a dictionary for discriminative sparse coding of training samples; while SRC and our method use the training samples themselves as the synthesis dictionary for sparse coding. Similar to D-KSVD [12], LC-KSVD [13], JEDL [14], DCADL [16], TL-FC [17], and DNAOL [18], our method applies the learnt linear classifier to the sparse feature to obtain the label of a test sample. However, there are differences in the training models.…”
Section: B. Classification-Friendly Sparse Encoder and Classifier Learning (citation type: mentioning)
confidence: 99%
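The classification step shared by the methods named in this excerpt, a learnt linear classifier applied to a sparse feature, amounts to an arg-max over class scores. Below is a minimal sketch of that step; the names `W`, `z`, and `classify` are hypothetical placeholders for whatever classifier and feature a given method actually learns.

```python
import numpy as np

def classify(W, z):
    """Predict a label by applying a learnt linear classifier to a sparse feature.

    W : (num_classes, feature_dim) learnt linear classifier
    z : (feature_dim,) sparse feature of a test sample
    """
    scores = W @ z                 # one linear score per class
    return int(np.argmax(scores))  # label with the largest score
```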
“…However, with only a simple analysis operator, this model cannot capture the nonlinear structure of the data points. Moreover, the linear operator P will not increase the separability of the training samples, and the indices corresponding to zero feature responses will be sensitive to slight variations [21]. To tackle this problem, a nonlinear operator f_η is introduced.…”
Section: Nonlinear Sparse Model (citation type: mentioning)
confidence: 99%
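To make the remedy in this excerpt concrete: the nonlinear operator f_η is applied elementwise to the linear responses Px, suppressing small responses smoothly instead of relying on exact zero entries. The sketch below uses soft thresholding as one plausible parametric choice of f_η; the actual nonlinearity learnt in the cited paper may differ, and all names here are illustrative.

```python
import numpy as np

def soft_threshold(v, eta):
    # One common parametric nonlinearity; the paper's learnt f_eta may differ.
    return np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)

def nacm_feature(P, x, eta):
    """Nonlinear analysis cosparse feature f_eta(P @ x).

    P   : (k, d) linear analysis operator / feature selector
    x   : (d,)   input sample
    eta : scalar threshold parameter of the nonlinearity
    """
    return soft_threshold(P @ x, eta)
```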
“…2) Nonlinear sparse model: As addressing the optimization of (3) is time-consuming, our method generates a nonlinear cosparse feature vector f_η(PX) [21], where P is a linear feature selector and f_η is a nonlinear operator that serves as a nonlinear feature extractor. This model therefore performs feature extraction and selection via merely two explicit feed-forward operations, remarkably reducing computational complexity compared with conventional iterative sparse recovery algorithms.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
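The complexity claim in this excerpt can be seen by comparing the two code paths below: the feed-forward feature costs one matrix product plus one pointwise nonlinearity, whereas iterative sparse recovery pays two matrix products per iteration. ISTA is used here purely as a representative baseline solver (the cited works may use others), and all symbols are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, eta):
    return np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)

def feedforward_feature(P, X, eta):
    # Two explicit operations: a linear map, then a pointwise nonlinearity.
    return soft_threshold(P @ X, eta)

def ista_sparse_code(D, x, lam, n_iter=100):
    # Conventional iterative sparse recovery (ISTA) for
    #   min_z 0.5 * ||D z - x||^2 + lam * ||z||_1.
    # Each iteration costs two matrix-vector products, so total cost grows
    # with n_iter, unlike the single feed-forward pass above.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)
        z = soft_threshold(z - grad / L, lam / L)
    return z
```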