2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489675
A Semi-Supervised Self-Organizing Map for Clustering and Classification

Abstract: There has been increasing interest in semi-supervised learning in recent years because of the great number of datasets with many unlabeled samples but only a few labeled ones. Semi-supervised learning algorithms can work with both types of data, combining them to obtain better performance in both clustering and classification. Moreover, these datasets commonly have a high number of dimensions. This article presents a new semi-supervised method based on self-organizing maps (SOMs) for clustering …

Cited by 13 publications (23 citation statements) · References 17 publications
“…Semi-supervised learning is catching on quickly among developers and researchers, as the numbers of unlabelled data are bigger than labeled data in a dataset and this issue is growing speedily [36,37]. Basically, semi-supervised learning is trained on a combination of labeled and unlabeled data.…”
Section: Machine Learning Algorithm
confidence: 99%
“…In ALTSS-SOM, each node j in the map represents a cluster and is associated with four m-dimensional vectors, where m is the number of input dimensions. The first three vectors, c_j, ω_j, and δ_j, are the same as defined in [2]: c_j = {c_ji, i = 1, …, m} is the center vector; ω_j = {ω_ji, i = 1, …, m} is the relevance vector; and δ_j = {δ_ji, i = 1, …, m} is the distance vector, which stores moving averages of the observed distance |x − c_j(n)| between the input patterns x and the center vector for each dimension. Note, however, that δ in SS-SOM and ALTSS-SOM can be seen as the biased first moment estimate.…”
Section: A Structure of the Nodes
confidence: 99%
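The node structure quoted above can be sketched in code. This is a minimal illustration, not the authors' implementation: the class name, the smoothing factor `beta`, and the exponential-moving-average update rule are assumptions chosen to match the quotation's description of δ as a biased first-moment estimate of the per-dimension distance |x − c_j|.

```python
import numpy as np

class SOMNode:
    """Sketch of a SOM node as described above: each node j keeps a
    center vector c_j, a relevance vector w_j, and a distance vector
    d_j holding moving averages of |x - c_j| per dimension."""

    def __init__(self, m, beta=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.c = rng.random(m)   # center vector c_j
        self.w = np.ones(m)      # relevance vector omega_j
        self.d = np.zeros(m)     # distance vector delta_j (biased EMA)
        self.beta = beta         # assumed smoothing factor

    def update_distance(self, x):
        # Biased first-moment estimate: exponential moving average of
        # the per-dimension distance between the input x and the center.
        self.d = self.beta * self.d + (1.0 - self.beta) * np.abs(x - self.c)
        return self.d
```

Because the average starts from zero and is never bias-corrected, early values of `d` underestimate the true mean distance, which is exactly why the quotation calls δ a *biased* first-moment estimate.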
“…We point out that prototype-based methods have been successfully applied for both tasks. Methods based on Self-Organizing Maps [2], [5], [6] and K-Means [7] can be highlighted as examples, as well as deep learning techniques [8]- [10]. The Self-Organizing Map (SOM) is an unsupervised learning method, frequently applied for clustering, while Learning Vector Quantization (LVQ) [5], its supervised counterpart that shares many similarities, is normally used for classification.…”
Section: Introduction
confidence: 99%
“…However, as mentioned, conventional forms of clustering suffer when dealing with highdimensional spaces. In this sense, SOM-based algorithms have been proposed [8], [15]- [17]. However, most of them do not have any form to explore the benefits of more advanced techniques, even a simple form of mini-batch learning.…”
Section: Introduction
confidence: 99%