Proceedings 15th International Conference on Pattern Recognition. ICPR-2000
DOI: 10.1109/icpr.2000.906014

Experiments with an extended tangent distance

Cited by 52 publications (32 citation statements)
References 8 publications
“…It will be seen that with 54 hidden layers, our model achieves state-of-the-art performance; that is, an error rate of 2.69%, surpassing the conventional ResNet (baseline model). In addition, Fig. 3 (right) shows the performance of the best proposed model (54-hidden-layer S-ResNet).

Method                                                          Error rate [%]
Invariant vector supports [20]                                  3.00
Neural network (LeNet) [21]                                     4.20
Sparse Large Margin Classifiers (SLMC) [22]                     4.90
Incrementally Built Dictionary Learning (IBDL-C) [23]           3.99
Neural network + boosting [21] *                                2.60
Tangent distance [24] *                                         2.50
Human performance [24]                                          2.50
Kernel density + virtual data [25] *                            2.40
Kernel density + virtual data + classifier combination [25] *   2.20
Nearest neighbour [25]                                          5.

Table 1.…”
Section: Experiments and Discussion (mentioning)
confidence: 99%
“…To address real-world applicability, we tested our approach on classifying the raw US-Postal-Service (USPS) data by padding out-of-image pixels with zero and performing bilinear interpolation.

Method                        Error rate [%]
SVM, no invariance [9]        4.0
SVM, VSV-method [9]           3.2
TD + kernel densities [6]     2.4
Human performance [12]        2.5
…”
Section: Classification of USPS Data (mentioning)
confidence: 99%
“…The invariant distance computation is then based on minimizing the distance between the sets of transformed samples. Similar formalizations of such distances are widely available (Vasconcelos and Lippman 1998; Simard et al. 1998; Keysers et al. 2000). In particular, this notion of invariant distance covers many specific examples in the literature.…”
Section: Invariant Distance Substitution Kernels (mentioning)
confidence: 99%
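The excerpt above describes invariant distances obtained by minimizing the distance between sets of transformed samples, of which tangent distance is the classic linearized case. A minimal sketch of a one-sided tangent distance, assuming the transformation manifold around a pattern x is approximated by the span of tangent vectors T (the function name and the finite-difference tangent construction below are illustrative, not taken from the cited papers):

```python
import numpy as np

def tangent_distance(x, y, T):
    """One-sided tangent distance: the distance from y to the linear
    approximation {x + T a} of x's transformation manifold, minimized
    over the tangent coefficients a.

    x, y: 1-D arrays of the same dimension.
    T: (dim, n_tangents) matrix whose columns are tangent vectors.
    The minimization over a is an ordinary least-squares problem.
    """
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)
    return float(np.linalg.norm(x + T @ a - y))
```

For an image `img`, a horizontal-shift tangent could be approximated by a finite difference such as `(np.roll(img, 1, axis=1) - img).reshape(-1, 1)`; a two-sided variant would additionally minimize over tangents of y.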
“…If, in other cases, the set T is composed of exponentially many combinations of transformations, which operate locally on independent parts of the pattern x, efficient computation can be performed by sequentially addressing the different object parts and their local transformations. An example of this is the IDM (Keysers et al. 2000) for images, which can be evaluated in complexity growing linearly in the number of pixels. If, in other cases, the assumption of linear representation of the sets T_x holds, projection methods can be used to perform the exact minimization over infinitely many transformations very efficiently.…”
Section: Invariant Distance Substitution Kernels (mentioning)
confidence: 99%
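The IDM mentioned above chooses a best local displacement for each pixel independently, which is why its cost grows linearly in the number of pixels for a fixed warp range. A minimal one-sided sketch under that reading (the function name and `warp` parameter are illustrative assumptions, and real IDM variants typically compare local context windows rather than single pixel values):

```python
import numpy as np

def idm_distance(a, b, warp=1):
    """Image distortion model (IDM) style distance between two equally
    sized grayscale images: each pixel of `a` is matched to the best
    pixel within a (2*warp+1) x (2*warp+1) neighborhood of the same
    position in `b`. Displacements are chosen independently per pixel,
    so the cost is O(pixels * warp^2), i.e. linear in the number of
    pixels for a fixed warp range. With warp=0 this reduces to the
    squared Euclidean distance.
    """
    h, w = a.shape
    total = 0.0
    for i in range(h):
        for j in range(w):
            best = np.inf
            for di in range(-warp, warp + 1):
                for dj in range(-warp, warp + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:
                        best = min(best, (a[i, j] - b[y, x]) ** 2)
            total += best
    return total
```

Because each pixel picks its displacement independently, a one-pixel shift of a digit image costs nothing under this distance while the plain Euclidean distance would penalize it heavily.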