2020
DOI: 10.1007/978-3-030-63833-7_40
Improving Self-Organizing Maps with Unsupervised Feature Extraction

Cited by 12 publications (10 citation statements)
References 26 publications
“…Notice that our model has used the datasets in a raw form, without any feature extractor, which might radically enhance the quality of the vectors representing the data. The addition of feature extractors as input of the neural maps increases the algorithm's accuracy, as we showed in [32], [76].…”
Section: A. Simulation Results
confidence: 81%
“…Lastly, it should be noted that the problem addressed by our framework does not correspond to a typical SSL problem, since the existing labels are available (or provided) only after the training. So we consider a special type of SSL problem, called in the literature the Post-Labeled Unsupervised Learning problem [32], [76].…”
Section: Application Of Resom To Classification Tasks
confidence: 99%
“…Shallow SubTab is trained and evaluated under the same conditions as the deeper ones. As shown in Table 3, shallow SubTab significantly improves results in MNIST and TCGA, placing our model performance on par with CNN-based SOTA models [20,19,25,22,32] as shown in Figure 4b. Obesity is the only dataset which exploits the deeper architecture.…”
Section: Ablation Study
confidence: 72%
“…To accurately cluster such a dataset, the scattered subclusters must be merged into one cluster. Khacef et al (2019) and Khacef, Rodriguez & Miramond (2020) have accurately clustered MNIST using only six hundred labeled data points. Their proposed method projects input data onto a 2-dimensional feature map using SOM and then labels reference vectors using post-labeled unsupervised learning.…”
Section: Discussion
confidence: 99%
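The post-labeled unsupervised learning scheme described in the excerpts above — train a SOM without any labels, then label its reference vectors afterwards from a small labeled subset, and classify by best-matching unit — can be sketched roughly as follows. This is an illustrative toy version, not the authors' implementation: the dataset (two Gaussian blobs), map size, decay schedules, and the nearest-voted-neuron fallback for unlabeled neurons are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs in 4-D, 100 points each.
X = np.vstack([rng.normal(0.0, 0.3, (100, 4)),
               rng.normal(1.0, 0.3, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# 1) Unsupervised phase: train a small 2-D SOM on the raw data (no labels).
grid = 5                                        # 5x5 map of reference vectors
W = rng.random((grid * grid, X.shape[1]))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)])

for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                 # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)              # decaying neighborhood radius
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian neighborhood
    W += lr * h[:, None] * (x - W)

# 2) Post-labeling phase: labels become available only now, and only for a
#    small subset (here 20 of 200 points). Each labeled point votes for the
#    class of its best-matching unit.
idx = rng.choice(len(X), 20, replace=False)
votes = np.zeros((grid * grid, 2))
for i in idx:
    bmu = np.argmin(((W - X[i]) ** 2).sum(axis=1))
    votes[bmu, y[i]] += 1
neuron_label = votes.argmax(axis=1)              # majority vote per neuron

# Neurons that received no votes inherit the label of the nearest voted
# neuron in weight space (a simple fallback assumed for this sketch).
voted = votes.sum(axis=1) > 0
for n in np.where(~voted)[0]:
    nearest = np.argmin(((W[voted] - W[n]) ** 2).sum(axis=1))
    neuron_label[n] = neuron_label[np.where(voted)[0][nearest]]

# 3) Classification: every point takes the label of its best-matching unit.
pred = np.array([neuron_label[np.argmin(((W - x) ** 2).sum(axis=1))]
                 for x in X])
print("accuracy:", (pred == y).mean())
```

The key property illustrated is the one the second excerpt highlights: the SOM never sees a label during training, so this is not standard semi-supervised learning — supervision enters only in the cheap post-hoc labeling of the map's reference vectors.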