1980
DOI: 10.1007/bf00344251
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position

Abstract: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, unaffected by their positions. This network is given the nickname "neocognitron". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wies…

Cited by 4,627 publications (2,345 citation statements)
References 6 publications
“…5.9), although his architecture was comparatively deep indeed. For downsampling purposes he used Spatial Averaging (Fukushima, 1980, 2011) instead of Max-Pooling (MP, Sec. 5.11), currently a particularly convenient and popular WTA mechanism.…”
Section: Convolution
confidence: 99%
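The averaging-versus-max distinction in the excerpt above can be sketched in a few lines. The function below is an illustrative toy (the 2×2 non-overlapping window and function name are assumptions, not the papers' exact schemes): `"avg"` mimics the spatial averaging of the original Neocognitron, while `"max"` is the winner-take-all max-pooling common in modern CNNs.

```python
import numpy as np

def pool2x2(x, mode="avg"):
    """Downsample a 2D feature map over non-overlapping 2x2 windows.

    mode="avg": spatial averaging (as in the original Neocognitron).
    mode="max": max-pooling, the WTA variant used in modern CNNs.
    Illustrative sketch only, not Fukushima's exact scheme.
    """
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]            # trim to even size
    windows = x.reshape(h // 2, 2, w // 2, 2)  # group into 2x2 blocks
    if mode == "avg":
        return windows.mean(axis=(1, 3))
    return windows.max(axis=(1, 3))

fm = np.array([[1., 2., 0., 0.],
               [3., 4., 0., 8.],
               [0., 0., 5., 6.],
               [0., 0., 7., 9.]])
avg = pool2x2(fm, "avg")  # values [[2.5, 2.0], [0.0, 6.75]]
mx = pool2x2(fm, "max")   # values [[4.0, 8.0], [0.0, 9.0]]
```

Averaging blurs the response over the window, whereas max-pooling keeps only the strongest activation; both halve the spatial resolution.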
“…5.11) CNNs (Fukushima, 1979; LeCun et al., 1989) Multi-Column GPU-MPCNNs (Ciresan et al., 2011b) are committees (Breiman, 1996; Schapire, 1990; Wolpert, 1992; Hashem and Schmeiser, 1992; Ueda, 2000; Dietterich, 2000a) of GPU-MPCNNs with simple democratic output averaging. Several MPCNNs see the same input; their output vectors are used to assign probabilities to the various possible classes.…”
Section: MPCNNs on GPU Achieve Superhuman Vision Performance
confidence: 99%
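The "simple democratic output averaging" described above can be sketched directly: each column of the committee sees the same input and emits a class-probability vector, and the committee's prediction is the argmax of the unweighted mean. The three probability vectors below are made-up stand-ins for column outputs, purely for illustration.

```python
import numpy as np

# Hypothetical outputs of three committee columns on one input image,
# each a probability distribution over three classes (illustrative data).
column_outputs = np.array([
    [0.7, 0.2, 0.1],   # column 1: P(class 0), P(class 1), P(class 2)
    [0.6, 0.3, 0.1],   # column 2
    [0.5, 0.1, 0.4],   # column 3
])

committee = column_outputs.mean(axis=0)  # democratic (unweighted) average
prediction = committee.argmax()          # committee votes for class 0 here
```

No column is weighted above any other, which is what makes the averaging "democratic"; more elaborate committee schemes (Breiman, 1996; Wolpert, 1992) learn weights instead.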
“…Filter responses are then repeatedly pooled and refiltered, resulting in a deep feed-forward network architecture whose output feature vectors are eventually classified. One of the first hierarchical neural systems was the Neocognitron by Fukushima (1980), which inspired many of the more recent variants.…”
Section: Introduction
confidence: 99%
“…However, for performing visual tasks, as well as the classification of MCCs, this approach is far from sufficient. Another type of neural network model is Fukushima's Neocognitron (Fukushima, 1980, 1988; Fukushima & Miyake, 1982; Fukushima & Wake, 1991), and the evolved Tricognitron (Xu, 1993; Xu & Chang, 1996). Fukushima's Neocognitrons consist of feature-extraction (S) layers and fusion (C) layers, where the S layers are constructed either from pre-selected shape patterns or by unsupervised learning to code the object features in the planes.…”
Section: Introduction
confidence: 99%
“…Fukushima's Neocognitrons consist of feature-extraction (S) layers and fusion (C) layers, where the S layers are constructed either from pre-selected shape patterns or by unsupervised learning to code the object features in the planes. The unsupervised Neocognitrons were considered a biologically grounded neural model, effective for the recognition of characters invariant to size, translation, and handwriting (Fukushima, 1980, 1988; Fukushima & Miyake, 1982; Fukushima & Wake, 1991). Owing to this biological grounding and effectiveness in character recognition, the Neocognitrons were regarded as a very unique type of neural model.…”
Section: Introduction
confidence: 99%
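The S-layer/C-layer alternation described in the excerpts above can be sketched as a toy example. Both functions below are deliberate simplifications assumed for illustration (a template-matching S cell with a hard threshold, and 2×2 spatial averaging as the C cell), not Fukushima's actual cell equations; the point is that the fused (C) response is tolerant to small shifts of the detected feature.

```python
import numpy as np

def s_layer(img, template, theta=0.5):
    """Toy S (feature-extraction) cell plane: slide a template over the
    image and fire where the normalized match exceeds threshold theta.
    A simplification of Fukushima's S-cell response, for illustration."""
    th, tw = template.shape
    H, W = img.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + th, j:j + tw]
            score = (patch * template).sum() / (template.sum() + 1e-9)
            out[i, j] = score if score > theta else 0.0
    return out

def c_layer(fm):
    """Toy C (fusion) cell plane: 2x2 spatial averaging, so the pooled
    response tolerates small displacements of the feature."""
    h, w = fm.shape
    fm = fm[: h - h % 2, : w - w % 2]
    return fm.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A vertical-bar template detected in two slightly shifted images:
bar = np.array([[1.0], [1.0], [1.0]])      # 3x1 vertical bar
img = np.zeros((6, 6))
img[1:4, 2] = 1.0                          # bar at column 2
shifted = np.roll(img, 1, axis=1)          # same bar at column 3

r1 = c_layer(s_layer(img, bar))
r2 = c_layer(s_layer(shifted, bar))
print(np.allclose(r1, r2))  # True: the fused response survives the shift
```

The S plane responds at different positions for the two inputs, but after C-layer averaging the two response maps coincide, which is the shift-tolerance mechanism the abstract's title refers to.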