2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.430

Learning from the Mistakes of Others: Matching Errors in Cross-Dataset Learning

Cited by 8 publications (8 citation statements) | References 25 publications
“…Although the improvement can be seen as marginal in the linear kernel case, note that all methods are marginally better than a regular SVM, since some interactions are very similar to some others (e.g., walking to, walking away from, walking with), which makes the accurate classification of such tasks very challenging. Adaptive SVM+ is more accurate in 32 out of the 60 interactions, SVM MMD [28] in 19, and the rest are attributed to SVM, SVM+ and Adaptive SVM. When RBF kernels are used, there is a 2.81% relative improvement.…”
Section: Interact Dataset (citation type: mentioning)
confidence: 98%
“…Additionally, illustrations in the form of clip art are provided depicting the same 60 fine-grained categories in two different level settings: (i) category-level in which images and illustrations are collected independently, and (ii) instance-level in which 2-3 illustrations of the same interaction category are collected for a given image. We followed the same experimental procedure with the method of Sharmanska and Quadrianto [28] for the instance-level setting. They proposed a framework called SVM MMD to "learn from the mistakes of others" by minimizing the distribution mismatch between errors made in images and in privileged data (i.e., illustrations) using the Maximum Mean Discrepancy (MMD) criterion.…”
Section: Interact Dataset (citation type: mentioning)
confidence: 99%
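
The statement above refers to the SVM MMD criterion of Sharmanska and Quadrianto [28], which penalizes a mismatch between the distribution of errors a classifier makes on images and the distribution of errors made on privileged illustrations, measured with the Maximum Mean Discrepancy. The sketch below shows only the standard biased empirical estimate of squared MMD with an RBF kernel between two sets of per-example error values; the function names, gamma value, and placeholder data are illustrative assumptions and not the authors' implementation.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of squared MMD between samples X and Y:
    # mean(Kxx) + mean(Kyy) - 2 * mean(Kxy)
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

# Hypothetical usage: per-example error (e.g., hinge-loss slack) values for a
# classifier evaluated on images versus on privileged illustrations.
rng = np.random.default_rng(0)
image_errors = rng.random((100, 1))         # placeholder data
illustration_errors = rng.random((100, 1))  # placeholder data
print(mmd2(image_errors, illustration_errors, gamma=0.5))

A small MMD value indicates that the two error distributions are similar; in the cited framework this quantity acts as a regularizer alongside the usual SVM objective rather than as a standalone score.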