1989
DOI: 10.1162/neco.1989.1.4.541
Backpropagation Applied to Handwritten Zip Code Recognition

Abstract: The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
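The architectural constraint the abstract refers to is weight sharing: one small kernel is replicated across the whole image, so the network has far fewer free parameters than a fully connected one, yet every shared weight is still trained by ordinary backpropagation. Below is a minimal NumPy sketch of that idea; it is not the paper's exact architecture, and the synthetic 16x16 inputs, single 5x5 kernel, ReLU activation, layer sizes, and learning rate are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(x, k):
        # Valid 2-D cross-correlation: one shared kernel swept over the image.
        H, W = x.shape
        kh, kw = k.shape
        out = np.empty((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    # Toy stand-ins for normalized digit images: 32 random 16x16 inputs.
    X = rng.normal(size=(32, 16, 16))
    y = rng.integers(0, 10, size=32)

    K = rng.normal(scale=0.1, size=(5, 5))         # one shared 5x5 kernel
    W = rng.normal(scale=0.1, size=(12 * 12, 10))  # dense read-out layer
    lr = 0.01

    for epoch in range(20):
        loss = 0.0
        for x, t in zip(X, y):
            h = np.maximum(conv2d(x, K), 0.0)      # conv + ReLU feature map
            z = h.reshape(-1) @ W                  # class logits
            p = np.exp(z - z.max()); p /= p.sum()  # softmax probabilities
            loss -= np.log(p[t])                   # cross-entropy
            # Backpropagation through both layers.
            dz = p.copy(); dz[t] -= 1.0            # dL/dlogits
            dW = np.outer(h.reshape(-1), dz)
            dh = (W @ dz).reshape(12, 12) * (h > 0)
            # The kernel's gradient sums contributions from every position
            # where the shared weights were applied.
            dK = np.zeros_like(K)
            for i in range(12):
                for j in range(12):
                    dK += dh[i, j] * x[i:i + 5, j:j + 5]
            W -= lr * dW
            K -= lr * dK
        print(epoch, loss / len(X))

The effect of the constraint is visible in the parameter count: the entire 12x12 feature map is produced by just 25 shared weights rather than a separate weight per connection.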

Cited by 10,006 publications (5,219 citation statements)
References 5 publications
“…19-5.23). This work also introduced the MNIST data set of handwritten digits (LeCun et al., 1989), which over time has become perhaps the most famous benchmark of Machine Learning. CNNs helped to achieve good performance on MNIST (LeCun et al., 1990a) (CAP depth 5) and on fingerprint recognition (Baldi and Chauvin, 1993); similar CNNs were used commercially in the 1990s.…”
Section: BP for Convolutional NNs (CNNs, Sec. 5.4)
Citation type: mentioning
Confidence: 99%
“…5.11) CNNs (Fukushima, 1979; LeCun et al., 1989). Multi-Column GPU-MPCNNs (Ciresan et al., 2011b) are committees (Breiman, 1996; Schapire, 1990; Wolpert, 1992; Hashem and Schmeiser, 1992; Ueda, 2000; Dietterich, 2000a) of GPU-MPCNNs with simple democratic output averaging. Several MPCNNs see the same input; their output vectors are used to assign probabilities to the various possible classes.…”
Section: MPCNNs on GPU Achieve Superhuman Vision Performance
Citation type: mentioning
Confidence: 99%
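As a gloss on the mechanism quoted above: a committee combines several independently trained models by simply averaging their per-class output vectors, the "democratic output averaging" the statement describes. The NumPy sketch below uses random placeholder probability vectors in place of actual trained GPU-MPCNNs; the member count and class count are illustrative assumptions.

    import numpy as np

    def committee_predict(member_probs):
        # Democratic output averaging: mean of the members' probability vectors.
        return np.mean(member_probs, axis=0)

    # Three hypothetical committee members scoring one input over 10 classes;
    # random softmax outputs stand in for trained networks.
    rng = np.random.default_rng(1)
    logits = rng.normal(size=(3, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    avg = committee_predict(probs)
    print("committee prediction:", int(np.argmax(avg)))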
“…We use deep learning techniques [13][14][15][16] to infer model parameters and to optimize algorithm settings. Our training pipeline (Fig.…”
Section: Training DeepBind and Scoring Sequences
Citation type: mentioning
Confidence: 99%
“…In the first layer, these filters are able to detect, for instance, edges and very simple shapes, but composing a hierarchy of these filters allows for great compositional power to express complex features and is an important reason DNNs have proven to be so successful. As determining these filters by hand is practically impossible, DNNs are trained by backpropagation (LeCun et al., 1989), a standard machine learning optimization method based on gradient descent. Given a cost function that determines a single error value for an input and an expected output, backpropagation assigns a credit to each single unit in the network, specifying how much it contributed to the error.…”
Section: Box 1: Deep Neural Network
Citation type: mentioning
Confidence: 99%
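To make the quoted credit-assignment idea concrete: for a small network and a cost that yields a single error value, backpropagation computes a per-unit "delta" via the chain rule that quantifies how much each unit contributed to that error. This is a minimal sketch, not any cited paper's implementation; the two-layer topology, sigmoid activations, squared-error cost, and single training example are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=4)             # input
    t = np.array([1.0, 0.0])           # expected output
    W1 = rng.normal(scale=0.5, size=(3, 4))
    W2 = rng.normal(scale=0.5, size=(2, 3))

    sig = lambda a: 1.0 / (1.0 + np.exp(-a))

    # Forward pass: input -> hidden units -> output units.
    h = sig(W1 @ x)
    yhat = sig(W2 @ h)
    cost = 0.5 * np.sum((yhat - t) ** 2)          # single error value

    # Backward pass: the chain rule assigns each unit its share of the error.
    delta_out = (yhat - t) * yhat * (1 - yhat)    # credit of each output unit
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # credit of each hidden unit

    # Per-weight gradients follow directly from the per-unit credits.
    grad_W2 = np.outer(delta_out, h)
    grad_W1 = np.outer(delta_hid, x)
    print("cost:", cost, "hidden-unit credits:", delta_hid)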