Constrained self-organizing feature map to preserve feature extraction topology

2016 · DOI: 10.1007/s00521-016-2346-0
Abstract: In many classification problems, it is necessary to consider the specific location within an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing fea…
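The abstract is truncated, but the baseline the paper builds on is the standard Kohonen self-organizing map. For reference, below is a minimal sketch of plain SOM training under assumed settings; the function name and parameters (grid_shape, lr0, sigma0) are illustrative choices, not the paper's, and the constrained variant adds location restrictions that are not shown here.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=1000,
              lr0=0.5, sigma0=3.0, rng=None):
    """Basic 2-D SOM training loop (hypothetical parameter names)."""
    if rng is None:
        rng = np.random.default_rng(0)
    rows, cols = grid_shape
    dim = data.shape[1]
    # One weight vector per grid unit, randomly initialized.
    weights = rng.random((rows, cols, dim))
    # Grid coordinates, used by the neighborhood function.
    gy, gx = np.mgrid[0:rows, 0:cols]

    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the grid unit whose weights are closest to x.
        d = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Exponentially decaying learning rate and neighborhood radius.
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighborhood centered on the BMU.
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        # Pull every unit toward x, weighted by its neighborhood value.
        weights += lr * h[..., None] * (x - weights)
    return weights
```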

Cited by 5 publications (2 citation statements) · References 37 publications
“…Firstly, as a feature extractor, the deep learning method often performs poorly on data with different feature distributions [18]. Secondly, processing the high-dimensional feature maps after combination becomes the key to the effectiveness of change detection, because high-dimensional feature maps are often interrelated [19].…”
Section: Introduction
confidence: 99%
“…During the past decade, researchers have come up with different methods to improve SOM. For example, DASOM introduces a denoising autoencoder to reduce the noise in the input space [8]; the constrained SOM preserves the topology structure by blocking the input space [9]; the robust SSGSOM introduces the HQ method into a semi-supervised growing SOM to improve network robustness [10]. PLSOM uses an adaptive learning rate instead of reducing the learning rate through training, which lets it focus on new patterns [11].…”
Section: Introduction
confidence: 99%
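Going only by the citing statement's description of the constrained SOM as "blocking the input space" [9], one plausible reading is that the grid is partitioned into blocks and the best-matching-unit search for an input is restricted to the block assigned to that input's spatial region. The sketch below illustrates that reading; `blocks` and `region` are hypothetical names, and the paper's actual constraint mechanism may differ.

```python
import numpy as np

def constrained_bmu(weights, x, region, blocks):
    """Find the BMU within the grid block assigned to `region`.

    weights: full SOM weight array of shape (rows, cols, dim).
    blocks:  dict mapping region id -> (row slice, col slice),
             with explicit slice starts, e.g. slice(0, 5).
    """
    rs, cs = blocks[region]
    # Restrict the distance computation to the block's sub-grid.
    sub = weights[rs, cs]
    d = np.linalg.norm(sub - x, axis=2)
    by, bx = np.unravel_index(np.argmin(d), d.shape)
    # Translate block-local indices back to full-grid coordinates.
    return by + rs.start, bx + cs.start

# Example: a 10x10 grid split into two blocks for two image regions.
blocks = {"top_left": (slice(0, 5), slice(0, 5)),
          "top_right": (slice(0, 5), slice(5, 10))}
```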