2018
DOI: 10.1155/2018/3145947

A New Generalized Deep Learning Framework Combining Sparse Autoencoder and Taguchi Method for Novel Data Classification and Processing

Abstract: Deep autoencoder neural networks have been widely used in several image classification and recognition problems, including handwriting recognition, medical imaging, and face recognition. The overall performance of deep autoencoder neural networks mainly depends on the number of parameters used, the structure of the neural network, and the compatibility of the transfer functions. However, an inappropriate structure design can cause a reduction in the performance of deep autoencoder neural networks. A novel framework,…

Cited by 43 publications (23 citation statements)
References 46 publications
“…Stacked sparse auto-encoder is a type of deep neural network built by stacking sparse auto-encoders, with a classifier regularly used as the final layer for classification or regression problems [18]. This model has not previously been applied to such a problem, which encouraged the authors to employ the technique for the current problem.…”
Section: Deep Auto-encoders Applied Robocode To Approximate Q-values (mentioning)
confidence: 99%
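To make the cited construct concrete, below is a minimal sketch of a single sparse auto-encoder layer of the kind being stacked. It assumes PyTorch, sigmoid units, and the standard KL-divergence sparsity penalty; the layer sizes, sparsity target, and penalty weight are illustrative choices, not the settings of the cited framework.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """One sparse auto-encoder layer: encode, decode, and penalize dense codes."""
    def __init__(self, n_in, n_hidden, rho=0.05, beta=3.0):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)
        self.rho = rho    # target average activation (sparsity level), assumed value
        self.beta = beta  # weight of the sparsity penalty, assumed value

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden code
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction of the input
        return h, x_hat

    def loss(self, x):
        h, x_hat = self.forward(x)
        recon = F.mse_loss(x_hat, x)
        # KL divergence between the target sparsity rho and the mean activation rho_hat
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat))).sum()
        return recon + self.beta * kl

# Unsupervised training: reconstruction only, no labels involved.
x = torch.rand(256, 784)                 # toy batch of flattened images (assumed shape)
sae = SparseAutoencoder(784, 128)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    sae.loss(x).backward()
    opt.step()
```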
“…Accordingly, the first auto-encoder is trained using an unsupervised training method [18]. Fundamentally, the output of the first sparse auto-encoder serves as the input to the second one, and the output of the second auto-encoder becomes the input to the classifier, as shown in the corresponding figure.…”
Section: Deep Auto-encoders Applied Robocode To Approximate Q-values (mentioning)
confidence: 99%
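A sketch of this greedy layer-wise wiring follows, assuming PyTorch and illustrative layer sizes: the first auto-encoder is trained on the raw inputs, its codes train the second auto-encoder, and the second-level codes feed a softmax classifier. The sparsity penalty is left out for brevity, so this shows only the stacking and data flow, not the cited paper's exact training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_autoencoder(data, n_hidden, epochs=10, lr=1e-3):
    """Train one auto-encoder on `data` by reconstruction; return the encoder and its codes."""
    n_in = data.shape[1]
    enc, dec = nn.Linear(n_in, n_hidden), nn.Linear(n_hidden, n_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        code = torch.sigmoid(enc(data))
        F.mse_loss(torch.sigmoid(dec(code)), data).backward()
        opt.step()
    return enc, torch.sigmoid(enc(data)).detach()

x = torch.rand(512, 784)                    # unlabeled inputs (assumed toy data)
y = torch.randint(0, 10, (512,))            # labels, used only by the final classifier

enc1, code1 = train_autoencoder(x, 256)     # first auto-encoder: x -> code1 (unsupervised)
enc2, code2 = train_autoencoder(code1, 64)  # second auto-encoder: code1 -> code2 (unsupervised)

# The second-level codes become the input to the classifier (a softmax layer here).
clf = nn.Linear(64, 10)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    F.cross_entropy(clf(code2), y).backward()
    opt.step()
```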
“…The stacked sparse autoencoder (SSAE) is a neural network composed of multiple layers of sparse autoencoders, which is used as an unsupervised feature extraction method, and the Taguchi Method is employed for parameter optimization. This novel framework is tested on different experimental data sets such as DDoS detection, IDS attack, epileptic seizure recognition, and handwritten digit classification (Karim et al., 2018). A deep autoencoder architecture was also proposed for medical data preprocessing, with a softmax classifier layer trained for classification.…”
Section: Introduction (mentioning)
confidence: 99%
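For the parameter-optimization side, the sketch below shows how a Taguchi L9(3^4) orthogonal array can drive this kind of tuning with only nine training runs instead of all 81 combinations of four three-level factors, scoring each run by a larger-is-better signal-to-noise ratio. The factor names, their level values, and the evaluate() stub are hypothetical placeholders, not the authors' actual experimental design.

```python
import math

# Standard L9 orthogonal array: 9 runs x 4 factors, levels coded 0..2.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical factors and levels for a stacked sparse autoencoder.
factors = {
    "hidden1":  [64, 128, 256],
    "hidden2":  [32, 64, 128],
    "sparsity": [0.01, 0.05, 0.10],
    "lr":       [1e-4, 1e-3, 1e-2],
}
names = list(factors)

def evaluate(params):
    """Placeholder: train the SSAE with `params` and return validation accuracy."""
    return 0.90 + (0.01 * sum(params.values())) % 0.05  # dummy score for illustration

# Run the nine prescribed experiments and record the larger-is-better S/N ratio.
sn = []
for run in L9:
    params = {name: factors[name][level] for name, level in zip(names, run)}
    acc = evaluate(params)
    sn.append(-10.0 * math.log10(1.0 / acc ** 2))

# For each factor, average the S/N ratio over the three runs at each level
# and keep the level with the highest mean response.
best = {}
for j, name in enumerate(names):
    level_means = [sum(s for s, run in zip(sn, L9) if run[j] == lvl) / 3.0
                   for lvl in range(3)]
    best[name] = factors[name][max(range(3), key=level_means.__getitem__)]

print("Suggested setting:", best)
```

The appeal of the orthogonal array is that each factor level still appears equally often across the nine runs, so the per-level averages stay balanced despite the drastically reduced number of experiments.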