2018 AIAA Guidance, Navigation, and Control Conference
DOI: 10.2514/6.2018-2100

Multi-View Monocular Pose Estimation for Spacecraft Relative Navigation

Abstract: This paper presents a method of estimating the pose of a non-cooperative target for spacecraft rendezvous applications employing exclusively a monocular camera and a three-dimensional model of the target. This model is used to build an offline database of pre-rendered keyframes with known poses. An online stage solves the model-to-image registration problem by matching two-dimensional point and edge features from the camera to the database. We apply our method to retrieve the motion of the now inoperational sate…
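The pipeline the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' code: the ORB detector, the brute-force matcher, and the database layout (per-keyframe descriptors plus a 2D-to-3D lookup) are all assumptions, with OpenCV's solvePnPRansac standing in for the paper's registration step.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def build_keyframe_database(rendered_views):
    """rendered_views: iterable of (image, pose, lookup_3d), where lookup_3d
    maps a keyframe keypoint index to its 3D coordinates on the target model."""
    database = []
    for image, pose, lookup_3d in rendered_views:
        keypoints, descriptors = orb.detectAndCompute(image, None)
        database.append((keypoints, descriptors, pose, lookup_3d))
    return database

def estimate_pose(camera_image, database, camera_matrix):
    keypoints, descriptors = orb.detectAndCompute(camera_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Pick the keyframe with the most matches as the closest stored view.
    best = max(((matcher.match(descriptors, kf_desc), lookup_3d)
                for _, kf_desc, _, lookup_3d in database),
               key=lambda t: len(t[0]))
    matches, lookup_3d = best
    # 2D points from the camera image, 3D points from the target model.
    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([lookup_3d[m.trainIdx] for m in matches])
    # RANSAC PnP recovers the target's rotation (rvec) and translation (tvec).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d,
                                                 camera_matrix, None)
    return (rvec, tvec) if ok else None
```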

Cited by 22 publications (26 citation statements)
References 32 publications
“…Lastly, when the target is assumed to be known, an offline training phase can be included in which its appearance is learned and condensed into a database to be matched on-the-go to the features detected during the actual mission in order to solve for the relative pose. This is challenging as the discretisation of a 3D object into a 2D representation warrants a feature matching process that is robust to large rotation, scale, and illumination baselines [15].…”
Section: The Camera As a Navigation Sensor
confidence: 99%
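As a minimal sketch of the matching-robustness concern raised in this statement, Lowe's ratio test is one standard filter for rejecting ambiguous descriptor matches under large rotation, scale, and illumination baselines. The citing paper does not specify this particular test, and the threshold value is illustrative.

```python
import cv2

def robust_matches(desc_query, desc_train, ratio=0.75):
    """Keep only matches that clearly beat their runner-up (Lowe's ratio test)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    candidates = matcher.knnMatch(desc_query, desc_train, k=2)
    return [pair[0] for pair in candidates
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
```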
“…The implementation of CNNs for monocular pose estimation in space has already become an attractive solution in recent years [10][11][12], also thanks to the creation of the Spacecraft PosE Estimation Dataset (SPEED) [11], a database of highly representative synthetic images of PRISMA's TANGO spacecraft made publicly available by Stanford's Space Rendezvous Laboratory (SLAB) and applicable to training and testing different network architectures. One of the main advantages of CNNs over standard feature-based algorithms for relative pose estimation [3,13,14] is an increase in robustness under adverse illumination conditions, as well as a reduction in computational complexity. Since the pose accuracies of the first adopted CNNs proved to be lower than the accuracies returned by common pose estimation solvers, especially in the estimation of the relative attitude [10], recent efforts investigated the capability of CNNs to perform keypoint localization prior to the actual pose estimation [15][16][17][18].…”
Section: Introduction
confidence: 99%
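A hedged sketch of the keypoint-localization pipeline this statement describes: a CNN regresses the 2D image locations of pre-defined model landmarks, and a standard PnP solver then recovers the pose from them. The network below is a placeholder; the layer sizes and landmark count are assumptions, not any of the cited architectures.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 11  # assumed count of pre-selected model landmarks

class KeypointRegressor(nn.Module):
    """Toy CNN that regresses (u, v) image coordinates for each landmark."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2 * NUM_KEYPOINTS)

    def forward(self, image):
        # image: (batch, 1, H, W) grayscale input
        return self.head(self.backbone(image)).view(-1, NUM_KEYPOINTS, 2)
```

The predicted 2D keypoints, paired with the known 3D coordinates of the same landmarks on the target model, would feed a PnP solver exactly as in the feature-based pipeline sketched earlier.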
“…Shi et al [29] combine the SIFT and BRIEF methods to extract the target interest points in the image, and EPnP is used to obtain the initial pose. Rondao et al [30] leverage the FREAK descriptor [31] in combination with the EDLines detector [32] to extract keypoints, corners, and edges and find correspondences between features, and an EPnP solver is utilized to generate the initial pose. Sharma et al [3] use Weak Gradient Elimination to alleviate the effect of image background on the accuracy of pose estimation, and the Sobel operators [33] and the Hough Transform are used to extract the features.…”
Section: Related Work
confidence: 99%
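All three works quoted above initialise the pose with EPnP, which OpenCV exposes directly. A minimal sketch, assuming 2D-3D correspondences (pts_2d, pts_3d) already produced by any of the detectors mentioned:

```python
import cv2
import numpy as np

def epnp_initial_pose(pts_3d, pts_2d, camera_matrix, dist_coeffs=None):
    """Initial pose from N >= 4 2D-3D correspondences using OpenCV's EPnP."""
    ok, rvec, tvec = cv2.solvePnP(np.float32(pts_3d), np.float32(pts_2d),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    return (rvec, tvec) if ok else None
```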
“…The initial learning rate is set to 0.001 and decayed by a factor of 2 every 50 epochs. We use K-means to determine the width and height of the six anchor priors (width, height): (30, 47), (42, 88), (55, 59), (73, 105), (103, 169), (172, 254). We construct the proposed model on the PyTorch framework.…”
Section: Training Details
confidence: 99%
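A sketch of deriving anchor priors with K-means as in the quoted training setup: cluster the (width, height) of the ground-truth boxes into six centroids. scikit-learn is used here for brevity and the Euclidean metric is an assumption; YOLO-style pipelines often cluster with a 1 - IoU distance instead, which this sketch omits.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_priors(box_sizes, k=6):
    """box_sizes: (N, 2) array of ground-truth (width, height) pairs in pixels."""
    km = KMeans(n_clusters=k, n_init=10).fit(np.asarray(box_sizes, dtype=float))
    centroids = km.cluster_centers_
    # Sort by area so the priors run from small to large objects, as in the quote.
    return centroids[np.argsort(centroids.prod(axis=1))]
```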