2020
DOI: 10.1016/j.neucom.2019.10.007
Autonomous deep learning: A genetic DCNN designer for image classification

Abstract: Recent years have witnessed the breakthrough success of deep convolutional neural networks (DCNNs) in image classification and other vision applications. Although they free users from troublesome handcrafted feature extraction by providing a uniform feature extraction-classification framework, DCNNs still require a handcrafted design of their architectures. In this paper, we propose the genetic DCNN designer, an autonomous learning algorithm that can generate a DCNN architecture automatically based on the data av…
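The abstract describes evolving DCNN architectures automatically from data. As a minimal, hypothetical sketch (not the authors' actual encoding, operators, or fitness function), a genetic loop over layer lists might look like this, with a toy fitness standing in for validation accuracy:

```python
import random

random.seed(0)

# Hypothetical architecture encoding: a list of (kernel_size, num_filters) conv layers.
KERNELS = [3, 5, 7]
FILTERS = [16, 32, 64, 128]

def random_arch(max_layers=5):
    depth = random.randint(1, max_layers)
    return [(random.choice(KERNELS), random.choice(FILTERS)) for _ in range(depth)]

def fitness(arch):
    # Toy stand-in for validation accuracy: reward depth, penalize parameter bloat.
    # The real designer would train each candidate network on the data instead.
    params = sum(k * k * f for k, f in arch)
    return len(arch) - 1e-4 * params

def mutate(arch):
    arch = list(arch)
    i = random.randrange(len(arch))
    arch[i] = (random.choice(KERNELS), random.choice(FILTERS))
    return arch

def crossover(a, b):
    # Single-point crossover on the layer lists.
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # a list of (kernel, filters) pairs for the fittest architecture
```

The population, selection, and variation steps are the generic GA skeleton; only the encoding and the fitness evaluation would need to change to match the paper's actual method.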

Cited by 94 publications (74 citation statements)
References 37 publications
“…On the MNIST dataset, the proposed method achieves an accuracy of 99.54%, which is superior to the deep CNN methods [3,4] that include a large number of convolutional layers. The test accuracy of SCNNB is similar to that of the state-of-the-art deep CNN [25] on MNIST. However, the network of [25] consists of 5 × 5 convolutions with 419 and 403 filters in the first and second convolutional layers, respectively, and a 7 × 7 convolution with 288 filters in the third convolutional layer.…”
Section: Results
confidence: 69%
“…The test accuracy of SCNNB is similar to that of the state-of-the-art deep CNN [25] on MNIST. However, the network of [25] consists of 5 × 5 convolutions with 419 and 403 filters in the first and second convolutional layers, respectively, and a 7 × 7 convolution with 288 filters in the third convolutional layer. Moreover, the SCNNB network has two 3 × 3 convolutions with 32 and 64 filters, respectively.…”
Section: Results
confidence: 69%
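The size gap described in the excerpts can be made concrete by counting convolutional weights. The sketch below assumes single-channel (MNIST) input, directly chained convolutions, and one bias per filter; the citing paper does not spell these details out, so the totals are illustrative only:

```python
def conv_params(kernel, in_ch, out_ch):
    # Weights per conv layer: kernel*kernel*in_ch*out_ch, plus one bias per filter.
    return kernel * kernel * in_ch * out_ch + out_ch

# Network of [25] as described: 5x5/419, 5x5/403, then 7x7/288, channels chained.
big = (conv_params(5, 1, 419)
       + conv_params(5, 419, 403)
       + conv_params(7, 403, 288))

# SCNNB as described: two 3x3 convolutions with 32 and 64 filters.
scnnb = conv_params(3, 1, 32) + conv_params(3, 32, 64)

print(big, scnnb)  # roughly 9.9M vs 18.8K conv parameters under these assumptions
```

Under these assumptions the large-filter network of [25] carries several hundred times more convolutional parameters than SCNNB, which is the point the citing authors are making.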
“…On the Fashion-MNIST dataset, our SCNND achieves 94.19% accuracy, which is lower than that of [5], [18], [19]. In [18], the network includes 7 convolutional layers with more filters. The networks of [5] and [19] have six times as many convolutional layers as SCNND.…”
Section: Results
confidence: 99%
“…In Table 1, we can clearly see that SCNND outperforms the methods of [3], [5], achieving a high accuracy of 99.60% on the MNIST dataset. Compared with the state-of-the-art method of [18] on MNIST, our SCNND includes two 3×3 convolutions with 32 and 64 filters, whereas the model of [18] uses larger convolutions (such as 7×7) and more filters (such as 419) to extract features.…”
Section: Experimental Parameters
confidence: 99%
“…Sun et al. [39] set the simple models they obtained using GA against complex architectures; a CNN optimized with GA based on the AlexNet architecture [42] has been proposed. The study proposed by Ma et al. [43], when tested on well-known image recognition problems, produced results that hold up against state-of-the-art architectures. Assunção et al. [44] likewise obtained results competitive with well-known architectures using CNNs generated automatically with evolutionary algorithms.…”
unclassified