2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00047

Deep Micro-Dictionary Learning and Coding Network

Abstract: In this paper, we propose a novel Deep Micro-Dictionary Learning and Coding Network (DDLCN). DDLCN has most of the standard deep learning layers (pooling, fully-connected, input/output, etc.), but the main difference is that the fundamental convolutional layers are replaced by novel compound dictionary learning and coding layers. The dictionary learning layer learns an over-complete dictionary for the input training data. At the deep coding layer, a locality constraint is added to guarantee that the activated …
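The abstract only names the two compound layers, so the sketch below is one plausible, simplified reading in Python/NumPy: a small dictionary learner that alternates between coding and atom updates, with the coding step realised as LLC-style locality-constrained coding (each input reconstructed from its k nearest atoms). The function names, the k-nearest-atom formulation, and all hyper-parameters are illustrative assumptions, not the authors' DDLCN implementation.

```python
import numpy as np

def locality_constrained_code(X, D, k=5, lam=1e-4):
    """Code each column of X over its k nearest dictionary atoms.

    One possible reading of the abstract's 'locality constraint'
    (LLC-style closed-form solution); the exact formulation used in
    DDLCN is not given here, so this is an assumption.
    """
    d, n = X.shape
    C = np.zeros((D.shape[1], n))
    for i in range(n):
        x = X[:, i]
        dist = np.linalg.norm(D - x[:, None], axis=0)
        idx = np.argsort(dist)[:k]            # k nearest atoms only
        z = D[:, idx] - x[:, None]            # locally shifted basis
        G = z.T @ z + lam * np.eye(k)         # regularised Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        C[idx, i] = w / w.sum()               # codes sum to one
    return C

def learn_dictionary(X, n_atoms, n_iter=20, seed=0):
    """Toy alternating dictionary learning: code, then update atoms.

    Stands in for the paper's dictionary learning layer (assumption:
    the real layer is trained jointly with the rest of the network).
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        C = locality_constrained_code(X, D)
        D = X @ np.linalg.pinv(C)             # least-squares atom update
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D

# Toy usage: 100 random 64-d inputs, 128-atom (over-complete) dictionary.
# Per the abstract, such codes would play the role of convolutional
# activations fed to the deeper layers.
X = np.random.default_rng(1).standard_normal((64, 100))
D = learn_dictionary(X, n_atoms=128)
codes = locality_constrained_code(X, D)
```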

Cited by 13 publications (14 citation statements).
References: 44 publications.
“…MNIST / Fashion-MNIST accuracy:
SRC [5]: 84.61% / 79.86%
DKSVD [7]: 84.69% / 78.38%
LC-KSVD1 [1]: 84.72% / 78.50%
LC-KSVD2 [1]: 85.88% / 79.27%
DLSI [9]: 88.39% / 79.48%
FDDL [8]: 87.93% / 80.67%
DPL [10]: 90.08% / 83.50%
LRSDL [19]: 87.80% / 81.99%
ADDL [12]: 88.90% / 82.10%
DDL [22]: 98.33% / -
SCN-4 [6]: 97.98% / 88.73%
DDLCN (100-100) [21]: 98.55% / -
CDPL-Net (no DPL layers): 98.64% …”
Section: Methods (mentioning)
confidence: 99%
“…We mainly evaluate our CDPL-Net for image representation and classification. The performance of CDPL-Net is mainly compared with several traditional DL methods including the sparse representation based classification (SRC) [5], DLSI [9], D-KSVD [7], LC-KSVD [1], FDDL [8], DPL [10], LRSDL [19] and ADDL [12], and four related deep learning models, including deep sparse coding network (SCN) [6], DDL [22], and DDLCN [21]. For image representation and classification on each database, we split it into a training set and a test set.…”
Section: Methods (mentioning)
confidence: 99%
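The evaluation protocol quoted above (learn a dictionary-based representation, then classify, on a fixed train/test split) can be mocked up with off-the-shelf components. In this rough sketch, scikit-learn's load_digits stands in for MNIST/Fashion-MNIST, and a generic MiniBatchDictionaryLearning model plus a linear SVM stands in for the SRC/K-SVD/DPL-family methods being compared; the dataset, models, and hyper-parameters are all assumptions, so the accuracies quoted in the table should not be expected from it.

```python
# Rough sketch of the quoted protocol: split, learn a dictionary on the
# training split only, code both splits, classify the codes, report accuracy.
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)           # stand-in for MNIST/Fashion-MNIST
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# learn a dictionary on the training split only, then code both splits
dico = MiniBatchDictionaryLearning(n_components=100, random_state=0)
codes_train = dico.fit_transform(X_train)
codes_test = dico.transform(X_test)

# classify the sparse codes and report test accuracy
clf = LinearSVC().fit(codes_train, y_train)
print("test accuracy: %.2f%%" % (100 * clf.score(codes_test, y_test)))
```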
“…MNIST / Fashion-MNIST accuracy:
SRC [5]: 84.61% / 79.86%
DKSVD [7]: 84.69% / 78.38%
LC-KSVD1 [1]: 84.72% / 78.50%
LC-KSVD2 [1]: 85.88% / 79.27%
DLSI [9]: 88.39% / 79.48%
FDDL [8]: 87.93% / 80.67%
DPL [10]: 90.08% / 83.50%
LRSDL [19]: 87.80% / 81.99%
ADDL [12]: 88.90% / 82.10%
DDL [22]: 98.33% / -
SCN-4 [6]: 97.98% / 88.73%
DDLCN (100-100) [21]: 98.55% / -
CDPL-Net (no DPL layers): 98.64% / 87.46%
Our CDPL-Net: 98.98% / 90.69%
Figure 7. Accuracy on Fashion-MNIST with different batch sizes.…”
Section: Methods (mentioning)
confidence: 99%
“…Part of this work has been published in [25]. The additional contributions are: 1) We present a more detailed analysis by including recently published works about deep dictionary learning.…”
Section: Introduction (mentioning)
confidence: 99%