2012
DOI: 10.1007/978-3-642-35289-8_17

Transformation Invariance in Pattern Recognition – Tangent Distance and Tangent Propagation

Abstract: In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of th…
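The title and abstract refer to two complementary techniques from the chapter: tangent distance, a transformation-invariant metric, and tangent propagation, a regularizer that penalizes variation of the model output along transformation directions. Below is a minimal numpy sketch of the one-sided tangent distance for horizontal translation, plus a finite-difference version of the tangent propagation penalty. All function names here are hypothetical, and the chapter's full method uses a two-sided distance over a richer set of transformations; this is only an illustration of the idea.

```python
import numpy as np

def shift_tangent(img):
    # Tangent vector of horizontal translation, approximated by a
    # central finite difference along the x-axis.
    return (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0

def one_sided_tangent_distance(x, e):
    # Squared distance from x to the line {e + a * t}, where t spans
    # e's translation tangent; the least-squares a is projected out.
    t = shift_tangent(e).ravel()
    r = (x - e).ravel()
    a = (t @ r) / (t @ t + 1e-12)
    return float(np.sum((r - a * t) ** 2))

def tangent_prop_penalty(f, x, t, eps=1e-3):
    # Tangent propagation regularizer: penalize the variation of the
    # model output f along the tangent direction t (finite difference).
    return float(np.sum(((f(x + eps * t) - f(x - eps * t)) / (2 * eps)) ** 2))

# Toy demo on a smooth synthetic image (a Gaussian blob).
yy, xx = np.mgrid[0:16, 0:16]
e = np.exp(-((xx - 8.0) ** 2 + (yy - 8.0) ** 2) / 8.0)
x = np.roll(e, 1, axis=1)  # x is e shifted one pixel to the right

print("euclidean:", float(np.sum((x - e) ** 2)))
print("tangent  :", one_sided_tangent_distance(x, e))  # much smaller

rng = np.random.default_rng(0)
w = rng.random(e.size)
f = lambda img: np.array([w @ img.ravel()])  # toy linear "model"
print("tangent-prop penalty:", tangent_prop_penalty(f, e, shift_tangent(e)))
```

Because the shifted blob lies almost on the translation tangent line at e, the tangent distance collapses relative to the Euclidean gap, which is exactly the invariance the chapter is after.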

Cited by 216 publications (182 citation statements)
References 11 publications

“…Many approaches to invariant pattern recognition are known [2] and TD has been used in a variety of settings, including neural networks and memory based techniques like (k-) nearest neighbor algorithms (k-NN) [3], while in our experiments KD based classifiers obtained better results. A number of solutions have been proposed for efficient implementation of such algorithms, e.g.…”
Section: Introduction
confidence: 72%
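The excerpt above pairs tangent distance (TD) with memory-based classifiers such as k-NN. The following is a minimal sketch of that combination, assuming numpy, a one-sided TD restricted to image translations, and hypothetical helper names; the chapter's two-sided distance over more transformations would slot into the same k-NN loop.

```python
import numpy as np

def translation_tangents(img):
    # Finite-difference tangents for horizontal and vertical shifts.
    tx = (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0
    ty = (np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)) / 2.0
    return np.stack([tx.ravel(), ty.ravel()], axis=1)  # (n_pixels, 2)

def tangent_distance(x, e):
    # One-sided TD: squared distance from x to the tangent plane at e.
    T = translation_tangents(e)
    r = x.ravel() - e.ravel()
    a, *_ = np.linalg.lstsq(T, r, rcond=None)  # best shift parameters
    return float(np.sum((r - T @ a) ** 2))

def knn_td_predict(x, prototypes, labels, k=3):
    # Plain k-NN majority vote, with TD replacing the Euclidean metric.
    d = [tangent_distance(x, e) for e in prototypes]
    nearest = np.argsort(d)[:k]
    return np.bincount(labels[nearest]).argmax()
```

The design point is that TD is a drop-in replacement for the metric: the classifier itself is unchanged, which is why memory-based methods were a natural early home for it.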
“…It will be seen that with 54 hidden layers, our model achieves state-of-the-art performance; that is, an error rate of 2.69%, surpassing the conventional ResNet (baseline model). In addition, Fig. 3 (right) shows the performance of the best proposed model (the 54-hidden-layer S-ResNet). Table 1.…”

Error rates (%) quoted from the citing paper's Table 1:
Invariant vector supports [20]: 3.00
Neural network (LeNet) [21]: 4.20
Sparse Large Margin Classifiers (SLMC) [22]: 4.90
Incrementally Built Dictionary Learning (IBDL-C) [23]: 3.99
Neural network + boosting [21]*: 2.60
Tangent distance [24]*: 2.50
Human performance [24]: 2.50
Kernel density + virtual data [25]*: 2.40
Kernel density + virtual data + classifier combination [25]*: 2.20
Nearest neighbour [25]: 5.…

Section: Experiments and Discussion
confidence: 99%
“…This imposes limitations to their applicability, since many nonpd (npd) kernels arise as similarity measures. For example, in [13][14][15] the authors tried to incorporate invariance or robustness into the measure. Another family of useful npd kernels are the compact support (cs) kernels [16].…”
Section: Introduction
confidence: 99%
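To see why invariance-motivated similarity measures like these tend to break positive definiteness, one can substitute a symmetrized tangent distance for the squared Euclidean distance inside an RBF-style kernel and inspect the Gram spectrum. Below is a toy numpy sketch with hypothetical names and a per-point tangent obtained by finite differences; a negative eigenvalue, when it appears, witnesses that the kernel is npd (it need not appear for every random draw).

```python
import numpy as np

def tangent(e):
    # Per-point tangent: horizontal-shift finite difference, treating
    # the 64-vector as an 8x8 image.
    img = e.reshape(8, 8)
    return ((np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0).ravel()

def one_sided_td(x, e):
    # Tangent distance from x to the line through e along e's tangent.
    t = tangent(e)
    r = x - e
    a = (t @ r) / (t @ t + 1e-12)
    return float(np.sum((r - a * t) ** 2))

rng = np.random.default_rng(1)
X = rng.random((20, 64))  # 20 toy points

# Symmetrize the (asymmetric) one-sided TD, then plug it into an
# RBF-style kernel in place of the squared Euclidean distance.
D = np.array([[0.5 * (one_sided_td(a, b) + one_sided_td(b, a))
               for b in X] for a in X])
K = np.exp(-D / D.mean())

# A pd kernel would have a nonnegative spectrum; because each pairwise
# distance projects out a different direction, no global Euclidean
# embedding exists and the spectrum can dip below zero.
print("smallest eigenvalue of K:", np.linalg.eigvalsh(K).min())
```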