A Convolutional Deep Self-Organizing Map Feature extraction for machine learning
2020 · DOI: 10.1007/s11042-020-08822-9
Cited by 15 publications (9 citation statements) · References 33 publications
“…The set of all possible SQL queries must be divided into an unknown number of clusters. For this purpose, Kohonen's Self-Organizing Maps (hereinafter SOM) are used [15,16,17]. A SOM training algorithm is developed based on a rational value for the width of the winning neuron's topological neighborhood, which makes it possible to configure the neural network so as to prevent overfitting.…”
Section: Methods
confidence: 99%
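The excerpt above hinges on one tunable quantity: the width of the winning neuron's topological neighborhood. Below is a minimal Python sketch of SOM training that shows where that width (sigma) enters the weight update. The linear decay schedule, grid size, and learning rate are illustrative assumptions, not the "rational value" rule developed in the citing paper.

```python
# Minimal SOM training sketch with an explicit neighborhood-width parameter.
# The sigma schedule here is an assumption for illustration; keeping sigma
# from collapsing too quickly is one way to limit overfitting of the map.
import numpy as np

def train_som(data, grid_h=10, grid_w=10, epochs=20, lr0=0.5, sigma0=3.0):
    n, dim = data.shape
    weights = np.random.rand(grid_h, grid_w, dim)
    # Grid coordinates, used to measure topological distance to the winner.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    t, t_max = 0, epochs * n
    for _ in range(epochs):
        for x in data[np.random.permutation(n)]:
            # Best-matching unit (the "winning neuron").
            dists = np.linalg.norm(weights - x, axis=-1)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # Decaying learning rate and neighborhood width.
            frac = t / t_max
            lr = lr0 * (1.0 - frac)
            sigma = max(sigma0 * (1.0 - frac), 0.5)
            # Gaussian topological neighborhood around the winner.
            grid_d2 = np.sum((coords - np.array(winner)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            t += 1
    return weights
```

In the SQL-query setting of the excerpt, `data` would hold numeric feature vectors extracted from the queries; that vectorization step is outside the scope of this sketch.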
“…However, they trained several self-organizing maps for each input subspace and used the SOM for data visualization only. In another work, Sakkari and Zaied [20] proposed an architecture that alternates self-organizing layers, ReLU rectification, and abstraction layers. In their work, the self-organizing layer is composed of multiple region-based SOMs, with each map focusing on modeling a local sub-region of the input image.…”
Section: Related Work
confidence: 99%
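A rough Python sketch of the region-based self-organizing layer described above: the image is partitioned into sub-regions, each modeled by its own SOM codebook, and patch responses are rectified and pooled. The region size, patch size, response function, and max-pooling "abstraction" step are assumptions for illustration, not the exact architecture of Sakkari and Zaied [20].

```python
# Region-based SOM layer sketch: one small SOM per image sub-region.
# Assumes a 28x28 grayscale image (MNIST-style) so regions tile evenly.
import numpy as np

def extract_patches(region, k=5):
    h, w = region.shape
    return np.array([region[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def som_layer_response(image, soms, region=14):
    """soms[(ri, rj)] is a (units, k*k) codebook trained on that sub-region."""
    maps = []
    for ri in range(0, image.shape[0], region):
        for rj in range(0, image.shape[1], region):
            patches = extract_patches(image[ri:ri + region, rj:rj + region])
            codebook = soms[(ri, rj)]
            # Distance of each patch to each SOM unit, turned into a
            # similarity and rectified (the ReLU step in the excerpt).
            d = np.linalg.norm(patches[:, None, :] - codebook[None], axis=-1)
            act = np.maximum(0.0, 1.0 - d / (d.max() + 1e-9))
            # Abstraction step, sketched here as max-pooling over patches.
            maps.append(act.max(axis=0))
    return np.concatenate(maps)
```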
“…It is clear that using a 3D-SOM grid for DCSOM-2 features and a 4D-SOM grid for DCSOM-1 features outperforms state-of-the-art results on the MNIST-rand and MNIST-img datasets, respectively. These results prove that the features of DCSOM are more robust to noise than those of other methods when the dimension of the SOM grid is chosen carefully. The hard quantization of the SOM mapping greatly improves the representation of noisy patches compared with other CNN architectures that use either supervised or unsupervised learning.…”
Error rates (%) quoted in the excerpt:
… 1.2
Deep belief network + linear SVM [41] 1.9
CDBN [42] 0.82
ConvNet [40] 0.53
DCTNet [34] 0.74
ScatNet-2 [12] 0.43
PCANet-2 [11] 0.66
LDANet-2 [11] 0.62
DSOM [23] 3.83
UDSOM [20] 1.06
CSOM (2D) [14] 0.81
SOMNet [13] 0.86
CR-MSOM [15] 0.97
DCSOM-1 (4D) 0.78
DCSOM-2 (4D) 0.57
Section: B. Experiments Using MNIST Variations Datasets
confidence: 99%
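The "hard quantization of the SOM mapping" mentioned in the excerpt can be sketched as follows: each patch activates only its best-matching unit, and an image is summarized by the histogram of winning units. The flat codebook and histogram pooling are simplifying assumptions; the paper's DCSOM arranges its units on 3D and 4D grids.

```python
# Hard-quantization sketch: one-hot BMU codes per patch, pooled into a
# histogram descriptor. Small perturbations of a patch rarely change its
# winner, which is why this coding is robust to noise.
import numpy as np

def hard_quantize(patches, codebook):
    """patches: (n, d); codebook: (units, d) -> one-hot codes (n, units)."""
    d = np.linalg.norm(patches[:, None, :] - codebook[None], axis=-1)
    bmu = d.argmin(axis=1)                      # index of best-matching unit
    codes = np.zeros((patches.shape[0], codebook.shape[0]))
    codes[np.arange(patches.shape[0]), bmu] = 1.0
    return codes

def image_descriptor(patches, codebook):
    # Histogram of BMU hits over all patches of one image.
    return hard_quantize(patches, codebook).sum(axis=0)
```

Here `patches` would come from a sliding window over the image and `codebook` from a trained SOM with its grid flattened to (units, d); both names are placeholders for this sketch.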