It is shown that a topographic product P, first introduced in nonlinear dynamics, is an appropriate measure of the preservation or violation of neighborhood relations. It is sensitive to large-scale violations of the neighborhood ordering, but does not account for neighborhood ordering distortions caused by varying areal magnification factors. A vanishing value of the topographic product indicates perfect neighborhood preservation; negative (positive) values indicate that the output space dimensionality is too small (too large). In a simple example of maps from a 2D input space onto 1D, 2D, and 3D output spaces, it is demonstrated how the topographic product picks the correct output space dimensionality. In a second example, 19D speech data are mapped onto various output spaces, and it is found that a 3D output space (instead of 2D) seems to be optimally suited to the data. This is in agreement with a recent speech recognition experiment on the same data set.
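For concreteness, here is a minimal Python sketch of the topographic product, following its standard definition (ratios of input- and output-space distances to the k-th nearest neighbors under both neighbor orderings, averaged on a log scale); the function and variable names are ours, not from the paper.

```python
import numpy as np

def topographic_product(weights, grid):
    """Topographic product P for a trained map.

    weights: (N, d_V) codebook vectors in input space V
    grid:    (N, d_A) neuron positions in output space A
    P ~ 0: neighborhoods preserved; P < 0: output dimensionality
    too small; P > 0: output dimensionality too large.
    """
    N = len(weights)
    dV = np.linalg.norm(weights[:, None] - weights[None], axis=-1)
    dA = np.linalg.norm(grid[:, None] - grid[None], axis=-1)
    log_P = 0.0
    for j in range(N):
        # k-th nearest neighbors of unit j in output (A) and input (V) space
        nA = np.argsort(dA[j]); nA = nA[nA != j]
        nV = np.argsort(dV[j]); nV = nV[nV != j]
        # Q1(j,k), Q2(j,k): distance ratios between the two orderings
        # (assumes no two codebook vectors coincide, so no zero division)
        q1 = dV[j, nA] / dV[j, nV]
        q2 = dA[j, nA] / dA[j, nV]
        # log P3(j,k) = (1 / 2k) * sum_{l<=k} log(Q1(j,l) * Q2(j,l))
        cums = np.cumsum(np.log(q1 * q2))
        log_P += np.sum(cums / (2.0 * np.arange(1, N)))
    return log_P / (N * (N - 1))
```

Applied to, say, a 10x10 SOFM trained on 2D data, `grid` would hold the 100 lattice coordinates and `weights` the learned codebook; a P near zero would confirm the 2D grid choice.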
The magnification exponents μ occurring in adaptive map formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value μ = 1 as well as from the values that optimize, e.g., the mean square distortion error (μ = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents μ < 0. We present an extension of the self-organizing feature map algorithm, which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
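The core mechanism can be sketched as follows. This is an illustration under our own assumptions (a one-dimensional map, a win-count density estimate, arbitrary parameter values), not the paper's exact estimator: the winning unit's learning step is scaled by a local density estimate raised to a power m, so m steers the map's magnification, and m = 0 recovers the plain SOFM.

```python
import numpy as np

def magnification_controlled_som(data, n_units=50, m=0.5, sweeps=50,
                                 eps0=0.1, sigma=2.0, seed=0):
    """1D SOFM with adaptive local learning step sizes (sketch).

    The winner's step size is scaled by a local input density
    estimate raised to the power m, which steers the magnification;
    m = 0 recovers the plain SOFM. The win-count density estimate
    used here is our assumption, not the paper's estimator.
    """
    rng = np.random.default_rng(seed)
    w = np.sort(rng.choice(data, n_units))       # 1D codebook
    wins = np.ones(n_units)                      # win counts ~ local density
    pos = np.arange(n_units)                     # output grid positions
    for _ in range(sweeps * len(data)):
        x = data[rng.integers(len(data))]
        s = np.argmin(np.abs(w - x))             # winner unit
        wins[s] += 1.0
        p_hat = wins[s] / wins.sum()             # crude density estimate at winner
        eps_s = eps0 * p_hat ** m                # locally adapted step size
        h = np.exp(-(pos - s) ** 2 / (2.0 * sigma ** 2))  # neighborhood kernel
        w += eps_s * h * (x - w)
    return w

# Example: density-skewed 1D input; larger m packs more units into
# high-density regions, negative m inverts the magnification.
data = np.random.default_rng(1).beta(2.0, 5.0, size=2000)
codebook = magnification_controlled_som(data, m=0.5)
```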
Neural maps project data from an input space onto neuron positions in an (often lower-dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply the GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
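The growth loop can be sketched at a high level as follows. Note that the candidate-comparison rule used here (retrain each candidate shape and keep the one with the lowest quantization error) is a simple stand-in of our own; the paper derives its growth decision from the statistics of the units' receptive fields.

```python
import numpy as np
from itertools import product

def train_sofm(data, shape, sweeps=5, eps=0.1, sigma=1.0, seed=0):
    """Plain SOFM on a hypercubical output grid of the given shape."""
    rng = np.random.default_rng(seed)
    grid = np.array(list(product(*(range(n) for n in shape))), float)
    w = data[rng.choice(len(data), len(grid))]   # init codebook from data
    for _ in range(sweeps * len(data)):
        x = data[rng.integers(len(data))]
        s = np.argmin(((w - x) ** 2).sum(axis=1))            # winner
        h = np.exp(-((grid - grid[s]) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        w += eps * h[:, None] * (x - w)
    return w

def quantization_error(data, w):
    d2 = ((data[:, None] - w[None]) ** 2).sum(axis=-1)
    return np.sqrt(d2.min(axis=1)).mean()

def gsom_shape(data, grow_steps=6):
    """GSOM-style growth of a hypercubical output grid (sketch).

    After each training phase, every candidate grow step is tried:
    extend one existing axis by one row, or open a new axis of
    extent 2. The retrain-and-compare criterion is a stand-in for
    the paper's receptive-field-based decision.
    """
    shape = [2]                                   # minimal starting grid
    for _ in range(grow_steps):
        candidates = [shape[:i] + [shape[i] + 1] + shape[i + 1:]
                      for i in range(len(shape))]
        candidates.append(shape + [2])            # open a new dimension
        errors = [quantization_error(data, train_sofm(data, c))
                  for c in candidates]
        shape = candidates[int(np.argmin(errors))]
    return tuple(shape)
```

With `data` drawn from, e.g., a flat 2D rectangle embedded in a higher-dimensional input space, the loop should settle on a two-axis shape whose extents mirror the rectangle's aspect ratio.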