As space debris has become a cause of concern for space operations around Earth, active debris removal and satellite servicing missions have gained increasing attention. Within this framework, in specific scenarios, the chaser may be required to operate autonomously in the vicinity of a non-cooperative, unknown target. This paper presents a sampling-based receding-horizon motion planning algorithm that selects inspection maneuvers while accounting for many complex constraints. The proposed guidance solution is compared with classical approaches and is shown to exploit the natural dynamics of the relative motion to outperform them. In addition, the impact of different input-sampling exploration strategies is investigated, leading to a simple and more robust approach based on subset simulation.
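The abstract above mentions input sampling based on subset simulation, a rare-event estimation technique that reaches low-probability regions through a cascade of conditional levels. The paper's planner is not reproduced here; the sketch below only illustrates the core subset simulation idea on a toy one-dimensional problem, using a plain Metropolis random walk for the conditional levels (the full method typically uses a component-wise conditional sampler, and all parameter values here are illustrative).

```python
import math
import random

def subset_simulation(g, sample_prior, propose, b, n=2000, p0=0.1, rng=None):
    """Estimate P(g(x) > b) for a rare event via subset simulation.

    Intermediate thresholds are chosen adaptively so each conditional
    level has probability ~p0; conditional samples come from a simple
    Metropolis random walk targeting a standard normal prior (a sketch,
    not the component-wise sampler of the full method).
    """
    rng = rng or random.Random()
    xs = [sample_prior(rng) for _ in range(n)]
    prob = 1.0
    for _ in range(50):                      # cap on the number of levels
        xs.sort(key=g, reverse=True)
        n_seed = int(n * p0)
        c = g(xs[n_seed - 1])                # adaptive intermediate threshold
        if c >= b:                           # final level reached
            frac = sum(1 for x in xs if g(x) > b) / n
            return prob * frac
        prob *= p0
        seeds = xs[:n_seed]
        xs = []
        per_chain = n // n_seed
        for s in seeds:                      # MCMC restricted to {g > c}
            x = s
            for _ in range(per_chain):
                cand = propose(x, rng)
                # accept with the N(0,1) density ratio, if still above c
                if (g(cand) > c and
                        rng.random() < math.exp((x * x - cand * cand) / 2)):
                    x = cand
                xs.append(x)
    return prob

# Toy check: P(X > 3) for X ~ N(0,1); the exact value is about 1.35e-3,
# far too rare for a small crude Monte Carlo run to resolve reliably.
est = subset_simulation(
    g=lambda x: x,
    sample_prior=lambda r: r.gauss(0, 1),
    propose=lambda x, r: x + r.gauss(0, 1),
    b=3.0,
    rng=random.Random(0),
)
```

Each level multiplies the estimate by roughly p0, so three levels suffice to resolve a probability near 1e-3 with only a few thousand samples in total.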
Active debris removal and unmanned on-orbit servicing missions have gained interest in recent years, along with the possibility of performing them with an autonomous chasing spacecraft. In this work, new resources are proposed to aid the implementation of guidance, navigation and control algorithms for satellites devoted to the inspection of non-cooperative targets before any proximity operation is initiated. In particular, the use of Convolutional Neural Networks (CNNs) performing object detection and instance segmentation is proposed, and their effectiveness in recognizing components and parts of the target satellite is evaluated. Yet no reliable training-image dataset of this kind exists to date. A tailored, publicly available software tool has been developed to overcome this limitation by generating synthetic images. Computer-Aided Design models of existing satellites are loaded into a 3-D animation software and used to programmatically render images of the objects from different points of view and under different lighting conditions, together with the necessary ground-truth labels and masks for each image. The results show that a relatively low number of iterations is sufficient for a CNN trained on such datasets to reach a mean average precision in line with the state-of-the-art performance achieved by CNNs on common datasets. An assessment of the performance of the neural network when trained under different conditions is provided. To conclude, the method is tested on real images from the MEV-1 on-orbit servicing mission, showing that training the model exclusively on artificially generated images does not compromise the learning process.
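The rendering tool itself is not reproduced here, and the abstract does not specify how camera viewpoints are distributed around the target. One common way to programmatically cover "different points of view" is a Fibonacci lattice of camera positions on a sphere around the model, sketched below as an illustrative assumption:

```python
import math

def viewpoint_grid(n, radius=10.0):
    """Spread n camera positions quasi-evenly on a sphere of the given
    radius around the target, using a Fibonacci lattice. This is one
    plausible sampling scheme; the actual tool may differ."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n           # z uniform in (-1, 1)
        r = math.sqrt(1.0 - z * z)              # radius of the z-slice
        theta = golden * i
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

cams = viewpoint_grid(100)
```

Each returned position would then be fed to the animation software as a camera location (with the camera aimed at the model's origin), and the same loop can be repeated over a set of light directions to vary illumination.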
Small bodies such as asteroids and comets display great variability in their surface morphological features. These features are often unknown beforehand but can be exploited for hazard avoidance during landing, autonomous planning of scientific observations, and navigation. Algorithms performing these tasks are often data-driven, meaning they require realistic, sizeable, and annotated datasets whose preparation may in turn rely heavily on human intervention. This work develops a methodology for generating synthetic, automatically-labeled datasets, which are used in conjunction with real, manually-labeled ones to train deep-learning architectures.
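The abstract states that synthetic and real datasets are used together for training but does not say how they are combined. A simple and common scheme is to draw each training batch with a fixed fraction of manually-labeled real samples; the sketch below assumes such a scheme purely for illustration (the fraction and batching strategy are not from the paper):

```python
import random

def mixed_batches(synthetic, real, batch_size, real_fraction=0.25, rng=None):
    """Yield training batches mixing automatically-labeled synthetic
    samples with manually-labeled real ones. The 25% real fraction is
    an illustrative assumption, not a value from the paper."""
    rng = rng or random.Random()
    n_real = max(1, int(batch_size * real_fraction))
    while True:
        k = min(n_real, len(real))          # don't oversample a small real set
        batch = rng.sample(real, k) + rng.sample(synthetic, batch_size - k)
        rng.shuffle(batch)
        yield batch

batch = next(mixed_batches(list(range(1000)), list(range(1000, 1020)),
                           batch_size=16, rng=random.Random(1)))
```

Keeping a guaranteed minimum of real samples per batch is one way to prevent the larger synthetic set from dominating the gradient signal.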
This paper presents a deep learning-based pipeline for estimating the pose of an uncooperative target spacecraft from a single grayscale monocular image. Enabling autonomous vision-based relative navigation in close proximity to a non-cooperative Resident Space Object (RSO) would be especially appealing for mission scenarios such as on-orbit servicing and active debris removal. The Relative Pose Estimation Pipeline (RPEP) proposed in this work leverages state-of-the-art Convolutional Neural Network (CNN) architectures to detect the features of the target spacecraft using monocular vision. The pipeline is composed of three main subsystems: the input image is first processed by an object detection CNN that localizes the bounding box enclosing the target; a second CNN then regresses the locations of semantic keypoints of the spacecraft; finally, a geometric optimization algorithm exploits the detected keypoint locations to solve for the relative pose. The proposed pipeline demonstrated centimeter- and degree-level pose accuracy on the Spacecraft PosE Estimation Dataset (SPEED), along with considerable robustness to changes in illumination and background conditions. In addition, the architecture was shown to generalize well to real images, despite having been trained exclusively on synthetic data from SPEED.
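The geometric stage of such pipelines is typically formulated as a Perspective-n-Point problem: find the pose whose reprojection of the known 3-D keypoints best matches the detected 2-D keypoints. The exact solver used in the paper is not detailed in the abstract; the sketch below only shows the pinhole projection model and the reprojection-error objective that such a stage minimizes, with purely illustrative camera intrinsics:

```python
import math

def project(point, rot, trans, f=1000.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D keypoint under a candidate pose.
    rot is a 3x3 rotation (row tuples), trans a 3-vector; the focal
    length and principal point are illustrative values."""
    xc = [sum(rot[i][j] * point[j] for j in range(3)) + trans[i]
          for i in range(3)]
    return (f * xc[0] / xc[2] + cx, f * xc[1] / xc[2] + cy)

def reprojection_error(points3d, keypoints2d, rot, trans):
    """Mean pixel distance between the detected keypoints and the model
    keypoints reprojected under a candidate pose -- the quantity the
    geometric optimization stage drives toward zero."""
    err = 0.0
    for p, k in zip(points3d, keypoints2d):
        u, v = project(p, rot, trans)
        err += math.hypot(u - k[0], v - k[1])
    return err / len(points3d)
```

In practice this objective is minimized by a PnP solver (e.g. an EPnP initialization refined with Gauss-Newton iterations), with robust weighting to tolerate keypoints the CNN mislocates.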