2017 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR)
DOI: 10.1109/icwapr.2017.8076680
Improving the LeNet with batch normalization and online hard example mining for digits recognition

Cited by 5 publications (3 citation statements)
References 4 publications
“…Some authors presented a modified version of the LeNet network; for example, Lin et al [53] used smaller convolutional kernels (3 × 3) to increase the number of extracted features and reduced the fully connected layer from 10 units to 2 units. Xie et al [54] further improved LeNet by adding activation layers, batch normalisation layers, and online hard example mining. Li et al [55] increased the number of convolution kernels at some of the layers, adopted the ReLU activation function, used max-pooling instead of mean-pooling layers, and used an SVM at the output layer.…”
Section: Techniques
confidence: 99%
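As a rough illustration of the LeNet modifications quoted above (batch normalization after the convolutions, ReLU activations, max-pooling, and an online hard example mining loss), here is a minimal PyTorch sketch. The layer sizes, the keep_ratio value, and all names are illustrative assumptions, not the exact configurations from [53]–[55] or from the cited paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetBN(nn.Module):
    # LeNet-style CNN with batch normalization; layer sizes are
    # illustrative assumptions, not taken from the cited papers.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.bn1 = nn.BatchNorm2d(6)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.bn2 = nn.BatchNorm2d(16)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, num_classes)

    def forward(self, x):
        # conv -> batch norm -> ReLU -> max-pool, twice
        x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), 2)
        x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), 2)
        x = x.flatten(1)
        return self.fc2(F.relu(self.fc1(x)))

def ohem_loss(logits, targets, keep_ratio=0.7):
    # Online hard example mining: backpropagate only the highest-loss
    # fraction of the batch; keep_ratio is an assumed value.
    losses = F.cross_entropy(logits, targets, reduction="none")
    k = max(1, int(keep_ratio * losses.numel()))
    hard, _ = losses.topk(k)
    return hard.mean()

# Usage on MNIST-sized digit images:
model = LeNetBN()
x = torch.randn(8, 1, 28, 28)
loss = ohem_loss(model(x), torch.randint(0, 10, (8,)))
loss.backward()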
“…This was done so that the final DNN could be generally applied to different power systems under different conditions (voltage level, source impedance, etc.) [27]. For normalization, every value in the input data was divided by the maximum absolute value, and the same procedure was applied to the labels.…”
Section: Pre-processing
confidence: 99%
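The normalization quoted above scales both inputs and labels by the dataset's maximum absolute value, which maps every entry into [-1, 1]. A minimal NumPy sketch follows; the function name and the sample arrays are assumptions for illustration, not code from [27].

import numpy as np

def max_abs_normalize(a):
    # Divide every value by the maximum absolute value, so the
    # result lies in [-1, 1]; guard against an all-zero array.
    m = np.max(np.abs(a))
    return a / m if m != 0 else a

# Illustrative stand-ins for the power-system signals and labels.
X = np.array([[-3.0, 1.5], [0.5, 2.0]])
y = np.array([4.0, -2.0])
X_norm, y_norm = max_abs_normalize(X), max_abs_normalize(y)
print(X_norm.min(), X_norm.max())  # both within [-1, 1]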
“…The dataset in this study is the same as the one used in [5]. The CNN architecture used by the researchers has 4 convolutional layers.…”
unclassified