2019
DOI: 10.1007/978-3-030-11009-3_46
RPNet: An End-to-End Network for Relative Camera Pose Estimation

Abstract: This paper addresses the task of relative camera pose estimation from raw image pixels by means of deep neural networks. The proposed RPNet network takes pairs of images as input and directly infers the relative poses, without requiring camera intrinsics or extrinsics. While state-of-the-art systems based on SIFT + RANSAC can recover the translation vector only up to scale, RPNet is trained end-to-end to produce the full translation vector. Experimental results on the Cambridge Landmark data …
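For context, the relative pose that such a network regresses can be derived from two absolute camera poses. A minimal NumPy sketch is below; the world-to-camera convention (x_cam = R·x_world + t) and the (w, x, y, z) quaternion order are assumptions, not details from the paper:

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    # Conjugate = inverse for a unit quaternion.
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * q'.
    qv = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, qv), quat_conj(q))[1:]

def relative_pose(t1, q1, t2, q2):
    """Pose of camera 2 relative to camera 1.

    With x_cam_i = R(q_i) x_world + t_i, the relative transform is
    R_rel = R2 R1^T and t_rel = t2 - R_rel t1.
    """
    q_rel = quat_mul(q2, quat_conj(q1))
    t_rel = t2 - quat_rotate(q_rel, t1)
    return t_rel, q_rel
```

Note that t_rel here carries its true metric scale, which is exactly the quantity a SIFT + RANSAC pipeline cannot recover from image correspondences alone.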

Cited by 36 publications (20 citation statements)
References 25 publications (35 reference statements)
“…The result shows that the suitable range of β in outdoor scenes is between 250 and 2000. Using cross-validation, RPNet [7] found the most suitable value of the hyperparameter β for each location, which requires considerable time spent clustering the original dataset and evaluating the trained model. For RCPNet, we use automatic weighting of the loss terms based on homoscedastic uncertainty (as in [55]) across all locations, which is numerically more stable than a hand-tuned β.…”
Section: Learning Relative Translation and Rotation Simultaneously
confidence: 99%
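The trade-off the citing authors describe can be sketched concretely. With a fixed β, the loss is L = L_t + β·L_r; with homoscedastic-uncertainty weighting, two learnable log-variances s_t and s_r replace β. The function below is a minimal sketch of that weighted loss, assuming scalar translation/rotation errors; the variable names are illustrative, not taken from either paper:

```python
import numpy as np

def pose_loss(t_err, r_err, s_t, s_r):
    """Uncertainty-weighted pose regression loss.

    s_t and s_r are learnable log-variances: exp(-s) down-weights a
    noisy term, while the additive +s penalty stops the network from
    inflating the variances indefinitely. No hand-tuned beta is needed.
    """
    return t_err * np.exp(-s_t) + s_t + r_err * np.exp(-s_r) + s_r
```

In training, s_t and s_r would be optimized jointly with the network weights, so the balance between translation and rotation adapts per dataset instead of being cross-validated per location.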
“…Different from RPNet [7] and PoseNet [11], which are based on GoogLeNet, we use two branches of pre-trained ResNet34 networks [57] to construct a weight-sharing Siamese network [56]. The 6DoF relative camera pose is estimated end-to-end.…”
Section: Architecture of RCPNet
confidence: 99%
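The weight-sharing idea in that quote can be illustrated with a toy forward pass: one encoder, applied with the same weights to both inputs, whose embeddings are concatenated and regressed to a 7-D pose (3-D translation plus a 4-D quaternion). This is a minimal sketch with made-up feature dimensions, not the actual ResNet34-based architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single set of encoder weights: both branches of the Siamese
# network use W_enc, which is what "weight sharing" means here.
W_enc = rng.standard_normal((128, 512)) * 0.01   # hypothetical feature sizes
W_head = rng.standard_normal((7, 256)) * 0.01    # 3-D translation + 4-D quaternion

def encode(x):
    # Shared-branch embedding with a ReLU nonlinearity.
    return np.maximum(W_enc @ x, 0.0)

def relative_pose_head(feat1, feat2):
    # Concatenate both branch embeddings, then regress the relative pose.
    z = np.concatenate([encode(feat1), encode(feat2)])
    return W_head @ z

x1, x2 = rng.standard_normal(512), rng.standard_normal(512)
pose = relative_pose_head(x1, x2)   # 7-D relative pose vector
```

Sharing the encoder halves the parameter count relative to two independent branches and forces both images into the same feature space before the pose regression.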