2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.124

Continuous Pose Estimation with a Spatial Ensemble of Fisher Regressors

Abstract: In this paper, we treat the problem of continuous pose estimation for object categories as a regression problem on the basis of only 2D training information. While regression is a natural framework for continuous problems, regression methods have so far achieved inferior results with respect to 3D-based and 2D-based classification-and-refinement approaches. This may be attributed to their sensitivity to high intra-class variability, as well as to noisy matching procedures and a lack of geometrical constraints. We propose…

Cited by 7 publications (7 citation statements)
References 32 publications
“…In Table II, we present the results from the literature and the results of our three approaches with the same network architecture (our approach 1, our approach 2, and our approach 3). As seen, our approach 1 obtains a relative improvement in the MeanAE of approximately 31.8% with respect to state of the art [40]. Our approach 2 uses only 123 labeled samples with the regression loss and 1389 structured unlabeled samples with the structure loss but still outperforms the state of the art [40] by 11.6% in MeanAE.…”
Section: Methods
confidence: 75%
“…As seen, our approach 1 obtains a relative improvement in the MeanAE of approximately 31.8% with respect to state of the art [40]. Our approach 2 uses only 123 labeled samples with the regression loss and 1389 structured unlabeled samples with the structure loss but still outperforms the state of the art [40] by 11.6% in MeanAE. In addition, our approach 3 using only the 123 labeled samples with the regression loss is not as good as our approach 2, which can prove the effectiveness of the structure loss.…”
Section: Methods
confidence: 75%
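For reference, MeanAE is the mean absolute angular error over the test set, and the relative improvements quoted in these statements are relative reductions of that error. A minimal sketch of both quantities; the numeric values below are illustrative only, not taken from the cited papers:

```python
import numpy as np

def mean_ae_degrees(pred, gt):
    # Mean absolute angular error with 360-degree wraparound.
    diff = np.abs(np.asarray(pred, float) - np.asarray(gt, float)) % 360.0
    return float(np.mean(np.minimum(diff, 360.0 - diff)))

def relative_improvement(baseline_mae, new_mae):
    # Relative reduction of MeanAE, as a percentage.
    return 100.0 * (baseline_mae - new_mae) / baseline_mae

# Illustrative: 350 vs 10 degrees differ by 20, not 340, once wrapped.
print(mean_ae_degrees([350.0], [10.0]))            # → 20.0
# Illustrative: a baseline MeanAE of 10.0 reduced to 6.82 is a 31.8% gain.
print(round(relative_improvement(10.0, 6.82), 1))  # → 31.8
```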
“…[16] uses probabilistic regression based on Hough Forests with an uncertainty criterion for continuous pose estimation. Similarly, [6] uses regressors on Fisher-encoded vectors extracted from spatial cells. [7] trains a separate SVM-classifier per viewpoint bin using off-the-shelf CNN-features.…”
Section: Related Work
confidence: 99%
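The spatial-ensemble idea attributed to [6] can be caricatured as one regressor per spatial cell whose predictions are combined into a single pose estimate. The sketch below uses random stand-in features and plain ridge regression rather than real Fisher vectors (which would require local descriptors and a trained GMM); all dimensions and names are assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, dim, n_train = 4, 32, 100  # hypothetical ensemble size

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Stand-in per-cell encodings (real Fisher vectors would go here)
# and continuous pose labels in degrees.
X_cells = rng.normal(size=(n_cells, n_train, dim))
y = rng.uniform(0.0, 360.0, size=n_train)

# One independent regressor per spatial cell.
weights = [ridge_fit(X_cells[c], y) for c in range(n_cells)]

def predict(cell_feats):
    # Average the per-cell regressor outputs into one pose estimate.
    return float(np.mean([cell_feats[c] @ weights[c] for c in range(n_cells)]))
```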
“…The performance of statistical 2D feature based methods from the computer vision research literature inspired the development of most distinctive view-based techniques. Existing techniques [4,6,8] treat viewpoint estimation as a classification problem by dividing the viewpoint range into discrete bins. Ghodrati et al [6] train multiple Support Vector Machine (SVM) classifiers, one for each discrete viewpoint, treating each classifier independently of the others.…”
Section: Related Work
confidence: 99%
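The classification-by-binning scheme described here amounts to discretizing the viewpoint circle and scoring each bin with an independent classifier, one-vs-rest style. A minimal sketch; the 8-bin split and the linear scorers are hypothetical choices, not values from the cited work:

```python
import numpy as np

N_BINS = 8  # hypothetical 45-degree viewpoint bins

def angle_to_bin(angle_deg):
    # Discretize a continuous viewpoint into one of N_BINS classes.
    return int((angle_deg % 360.0) // (360.0 / N_BINS))

def bin_to_angle(b):
    # Map a predicted bin back to its center angle.
    return (b + 0.5) * (360.0 / N_BINS)

def predict_viewpoint(feat, per_bin_weights):
    # One linear scorer per bin, each trained independently of the
    # others (one-vs-rest); the highest-scoring bin's center wins.
    scores = [w @ feat for w in per_bin_weights]
    return bin_to_angle(int(np.argmax(scores)))
```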
“…He et al [8] use a two-step process, wherein a viewpoint-parametrized classifier is first used to estimate a coarse viewpoint followed by fine-tuning step. Fenzi et al [4] treat continuous viewpoint estimation as a regression problem which is solved using a Radial Basis Function Neural Network (RBF-NN). The RBF-NN is trained to predict the appearance features as a function of the viewpoint.…”
Section: Related Work
confidence: 99%
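The appearance-prediction idea can be sketched as an RBF network mapping viewpoint to appearance features, with pose recovered by searching for the angle whose predicted features best match the observation. The centers, weights, and grid step below are illustrative assumptions, not the trained model from [4]:

```python
import numpy as np

def rbf(theta, centers, gamma=0.05):
    # Gaussian RBF activations over viewpoint centers (degrees),
    # with angular wraparound.
    d = np.abs(theta - centers) % 360.0
    d = np.minimum(d, 360.0 - d)
    return np.exp(-gamma * d ** 2)

# Hypothetical setup: W maps RBF activations to a 16-dim appearance
# feature vector, i.e. f(theta) = W @ rbf(theta, centers).
centers = np.arange(0.0, 360.0, 30.0)
rng = np.random.default_rng(1)
W = rng.normal(size=(16, len(centers)))  # stand-in for trained weights

def estimate_pose(feat, step=1.0):
    # Grid search: the pose is the angle whose predicted appearance
    # features best reconstruct the observed feature vector.
    angles = np.arange(0.0, 360.0, step)
    errs = [np.linalg.norm(W @ rbf(a, centers) - feat) for a in angles]
    return float(angles[int(np.argmin(errs))])
```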