AIAA Scitech 2019 Forum 2019
DOI: 10.2514/6.2019-2005
Robust Features Extraction for On-board Monocular-based Spacecraft Pose Acquisition

Abstract: This paper presents the design, implementation, and validation of a robust feature extraction architecture for real-time on-board monocular vision-based pose initialization of a target spacecraft in application to on-orbit servicing and formation flying. The proposed computer vision algorithm is designed to detect the most significant features of an uncooperative target spacecraft in a sequence of two-dimensional input images that are collected on board the chaser spacecraft. A novel approach based on the fusi…

Cited by 29 publications (18 citation statements) | References 18 publications
“…The implementation of CNNs for monocular pose estimation in space has become an attractive solution in recent years [10][11][12], thanks also to the creation of the Spacecraft PosE Estimation Dataset (SPEED) [11], a database of highly representative synthetic images of PRISMA's TANGO spacecraft made publicly available by Stanford's Space Rendezvous Laboratory (SLAB) for training and testing different network architectures. One of the main advantages of CNNs over standard feature-based algorithms for relative pose estimation [3,13,14] is increased robustness under adverse illumination conditions, as well as reduced computational complexity. Since the pose accuracies of the first adopted CNNs proved to be lower than those returned by common pose estimation solvers, especially in the estimation of the relative attitude [10], recent efforts have investigated the capability of CNNs to perform keypoint localization prior to the actual pose estimation [15][16][17][18].…”
Section: Introductionmentioning
confidence: 99%
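The pipeline this statement describes — CNN keypoint localization followed by a conventional pose solver — typically closes with a Perspective-n-Point (PnP) step that recovers the relative pose from 2D–3D correspondences. As a generic illustration only (not the implementation of any of the cited works), a minimal Direct Linear Transform PnP can be sketched in NumPy; the intrinsics `K`, the model points, and the detected keypoints below are placeholder assumptions:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project Nx3 model points into the image with intrinsics K and pose (R, t)."""
    cam = R @ pts3d.T + t[:, None]          # 3xN points in the camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T               # Nx2 pixel coordinates

def dlt_pnp(K, pts3d, pts2d):
    """Linear PnP via the Direct Linear Transform (needs >= 6 non-coplanar points)."""
    # Normalize pixels with the inverse intrinsics so we solve for [R|t] directly.
    xy = (np.linalg.inv(K) @ np.c_[pts2d, np.ones(len(pts2d))].T)[:2].T
    A = []
    for (X, Y, Z), (x, y) in zip(pts3d, xy):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)                # null vector, defined up to scale/sign
    P /= np.cbrt(np.linalg.det(P[:, :3]))   # fix scale so det of rotation part is +1
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    return U @ Vt2, P[:, 3]                 # closest rotation matrix, translation
```

With noiseless correspondences this recovers the exact pose; in practice the cited works refine such a linear estimate (and reject outlier keypoints) before use.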
“…However, in general, applying only one of these methods directly to real images may not be successful since these methods are indiscriminate towards background and foreground. As suggested in our prior work [59], a combination of different methods should be used.…”
Section: Image Acquisition and Processingmentioning
confidence: 99%
“…In [59], we proposed a new robust feature detection algorithm able to deal with actual space imagery, which is characterized by variable and unfavorable illumination conditions. The proposed strategy is based on a gradient-based filter for background elimination, the use of three processing streams, and the synthesis of polylines to reduce the number of outliers.…”
Section: Image Acquisition and Processingmentioning
confidence: 99%
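The gradient-based background elimination mentioned in this statement exploits the fact that the deep-space background is nearly textureless. A crude sketch of the idea (not the algorithm of [59]; the Sobel kernels and the threshold value are illustrative assumptions) masks out pixels whose gradient magnitude is weak before any feature detection runs:

```python
import numpy as np

def gradient_background_mask(img, thresh=0.1):
    """Keep pixels with strong local gradients; suppress flat background.

    img: 2-D float array in [0, 1]. Returns a boolean mask that is True where
    the Sobel gradient magnitude exceeds `thresh` -- the dark, low-texture
    space background produces near-zero gradients and is masked out.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x kernel
    ky = kx.T                                                   # Sobel y kernel
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):                       # explicit 3x3 correlation
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy) > thresh
```

A detector restricted to the True region of this mask ignores both empty space and flat, untextured target surfaces, which is why [59] combines it with further processing streams and polyline synthesis.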
“…Therefore, we hope to obtain high-precision analytical solutions through analytical algorithms. 2) Based on noncooperative space measurement, the pose transformation is calculated using pattern matching and 3D point-cloud techniques [20][21][22][23].…”
Section: Introductionmentioning
confidence: 99%
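The point-cloud route this statement refers to typically reduces to estimating a rigid transform between matched 3D point sets. A standard building block for that step — shown here as a generic sketch, not the method of the works cited as [20]–[23] — is the Kabsch (orthogonal Procrustes) alignment:

```python
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) aligning matched Nx3 clouds: dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)            # centroids
    H = (src - cs).T @ (dst - cd)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation (least squares)
    t = cd - R @ cs
    return R, t
```

Real noncooperative pipelines must first establish the correspondences (e.g. via pattern matching or ICP-style iteration); Kabsch then gives the closed-form pose update for each matched set.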