Kohonen's self-organizing map (SOM) maps high-dimensional data into a low-dimensional representation (typically a 2-D or 3-D space) while preserving its topological characteristics. It is widely applied to data visualization because it preserves, as far as possible, the relations present in the high-dimensional input space. Here, we seek to go further by incorporating semantic meaning into the low-dimensional representation. In a conventional SOM, the semantic context of the data, such as class labels, has no influence on the formation of the map. As an abstraction of neural function, the SOM models bottom-up self-organization but not feedback modulation, which is also ubiquitous in the brain. In this paper, we demonstrate a hierarchical neural network that learns a topographical map which also reflects the semantic context of the data. Our method combines unsupervised, bottom-up topographical map formation with top-down supervised learning. We discuss the mathematical properties of the proposed hierarchical neural network and demonstrate its abilities with empirical experiments.
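To make the bottom-up component concrete, the following is a minimal sketch of a conventional Kohonen SOM in NumPy, not the authors' hierarchical model: each input is assigned to its best-matching unit (BMU), and weights near the BMU on the 2-D grid are pulled toward the input under a shrinking Gaussian neighborhood. Grid size, decay schedules, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iter=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Minimal conventional SOM: unsupervised, bottom-up map formation.

    data: (n_samples, dim) array. Returns a (grid_h, grid_w, dim)
    weight array; nearby grid units end up with similar weights.
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of each unit, used by the neighborhood function.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Exponentially decay learning rate and neighborhood radius
        # (one common schedule among several; an assumption here).
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighborhood centered on the BMU in grid space.
        grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
        # Move every unit's weights toward x, scaled by the neighborhood.
        weights += lr * h[..., None] * (x - weights)
    return weights
```

In this conventional formulation the update depends only on input similarity; the paper's contribution is to additionally let top-down, label-driven signals shape the map, which this sketch deliberately omits.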
In this study we want to connect our previously proposed context-relevant topographical maps with the deep learning community. Our architecture is a classifier whose hidden layers are hierarchical two-dimensional topographical maps. These maps differ from conventional self-organizing maps in that their organization is influenced in a top-down manner by the context of the data labels. In this way, bottom-up and top-down learning are combined in a biologically relevant representational learning setting. Compared to our previous work, we evaluate the model in a more challenging experimental setting and extend it with additional hidden representation layers, placing our discussion in the context of deep representational learning.
While the target article provides a glowing account of the excitement in the field, we stress that hierarchical predictive learning in the brain requires sparseness of the representation. We also question the relation between Bayesian cognitive processes and hierarchical generative models as discussed in the target article.