Hard real-time constraints in space missions create a need for fast video processing in numerous autonomous tasks. Video processing involves separating distinct image frames, computing image descriptors, and applying machine learning algorithms for object detection, obstacle avoidance, and other tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative description of an image within the available time budget. Tracking these informative points across consecutive image frames is needed in flow-estimation applications. Classical algorithms such as SIFT and SURF are milestones in the development of feature description, but their computational complexity and long run times prevent their adoption for real-time processing in critical missions. Hence, a time-conservative and less complex pre-trained Convolutional Neural Network (CNN) model is chosen in this paper as a feature descriptor. A 7-layer CNN model is designed and initialized with pre-trained VGG parameters, and its features are used to match points of interest across consecutive image frames of a lunar descent video. The performance of the system is evaluated through visual and empirical keypoint matching. The matching scores between two consecutive video frames obtained with CNN features are then compared with state-of-the-art algorithms such as SIFT and SURF. The results show that CNN features are more reliable and robust for time-critical keypoint-tracking tasks in space-mission video processing.
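The descriptor read-out step described above can be illustrated with a minimal sketch. The abstract does not specify the exact layer, stride, or normalisation, so those are assumptions here; in practice the activation tensor would come from the early convolutional blocks of a pre-trained VGG network:

```python
import numpy as np

def descriptor_at(feature_map, x, y, stride):
    """Read out the C-dimensional CNN descriptor for image keypoint (x, y).

    feature_map: (C, H', W') activation tensor from an early conv layer
    (e.g. the first VGG blocks); stride is the assumed cumulative
    downsampling factor between the input image and that layer.
    """
    cy, cx = y // stride, x // stride
    v = feature_map[:, cy, cx].astype(np.float64)
    n = np.linalg.norm(v)
    # L2-normalise (an assumption) so descriptor distances are comparable
    return v / n if n > 0 else v
```

Matching then reduces to comparing these per-keypoint vectors between consecutive frames, exactly as one would compare SIFT or SURF descriptors.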
Recently, advances in space research have widened the scope of many vision-based techniques. Computer vision techniques with manifold objectives require that valuable features be extracted from input data. This paper empirically analyzes well-known feature extraction techniques: Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and Convolutional Neural Network (CNN) features. A methodology for autonomously extracting features using a CNN is analyzed in more detail. These techniques are studied and evaluated empirically on lunar satellite images. For the analysis, a dataset containing different affine transformations of a video frame is generated from a sample lunar descent video. The nearest-neighbor algorithm is then applied for feature matching; for an unbiased evaluation, the same matching process is repeated for all the models. Well-known metrics such as repeatability and matching score are employed to validate the studied techniques. The results show that CNN features offer much better computational efficiency and more stable matching accuracy on lunar images than the other studied algorithms.
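The nearest-neighbour matching and matching-score evaluation described above can be sketched as follows. The ratio-test threshold and the exact metric definition are assumptions, since the abstract does not specify them:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a: (N, D) descriptors from frame A; desc_b: (M, D) from frame B.
    Returns (i, j) index pairs whose best match is clearly closer than
    the second best (ratio=0.75 is an assumed threshold).
    """
    # Pairwise Euclidean distances between every descriptor pair.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        if d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches

def matching_score(matches, n_keypoints):
    """Fraction of keypoints that found a match (assumed metric form)."""
    return len(matches) / n_keypoints
```

The same matcher is applied unchanged to SIFT, SURF, ORB, and CNN descriptors, which is what makes the comparison between the models unbiased.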
Smart instruments, sensors, and AI technologies play an important role in many fields such as medical science, Earth science, astrophysics, and space research. This article studies the role of sensors, instruments, and smart AI (artificial intelligence) based technologies in trajectory navigation for lunar missions. Lunar landing missions usually divide the powered descent phase into three to four sub-phases, each with its own set of initial and final constraints on the desired system state. Current landing systems depend on human competencies for the most crucial landing decisions. Trajectory planning and design are very significant in lunar missions and require highly precise inputs; manual systems may be prone to errors. In contrast, AI and smart-sensor-based measurements give an accurate picture of the trajectory path and support appropriate decisions where manual systems may lead to disasters. Manual systems are either pre-fed or rely on manual controls to guide the trajectory. For autonomous landing problems, trajectory design is a crucial task: automated trajectories play a vital role in measuring and predicting the landing-state parameters of the spacecraft. Nowadays, sensors, intelligent instruments, and the latest technologies go hand in hand to devise measurement methods for accurate calculations and appropriate decisions when landing a spacecraft at its designated destination. Space missions are very expensive and require enormous effort in designing smart systems for navigation trajectories. This paper designs all possible candidate reference navigation trajectories for autonomous lunar descent by employing 3D non-linear system dynamics with randomly chosen initial state conditions. The generated candidates do not rely on multiple hops and can therefore serve autonomous missions.
This research work makes use of smart sensors and AI-based techniques to train the system for this purpose. The trajectories are simulated in an automated environment to perform exhaustive analyses. The resulting trajectories accurately approximate their numerical counterparts and converge to their measured final state estimates. The generation rate of feasible trajectories measures the accuracy of the algorithm; the algorithm's accuracy is near 0.87 for a 100 s flight time, which is reasonable.
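The trajectory-generation pipeline above can be sketched as follows. The abstract does not give the descent dynamics or the feasibility criteria, so this is a minimal sketch under assumed point-mass dynamics, constant thrust, and illustrative feasibility bounds; it only shows how a generation rate of feasible trajectories could be measured from random initial states:

```python
import numpy as np

LUNAR_G = 1.62  # m/s^2, lunar surface gravity

def simulate_descent(r0, v0, thrust_acc, t_final=100.0, dt=0.1):
    """Integrate assumed 3-D point-mass dynamics r'' = a_thrust - g*e_z
    with forward Euler; returns the final position and velocity."""
    r, v = np.array(r0, float), np.array(v0, float)
    g = np.array([0.0, 0.0, -LUNAR_G])
    a = np.array(thrust_acc, float) + g
    for _ in range(round(t_final / dt)):
        v = v + a * dt
        r = r + v * dt
    return r, v

def feasible_rate(n_samples=200, seed=0):
    """Fraction of randomly initialized descents ending inside assumed
    final-state bounds (low altitude, low speed) -- the generation-rate
    metric used to score the algorithm."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_samples):
        r0 = rng.uniform([-500, -500, 1500], [500, 500, 2500])   # m
        v0 = rng.uniform([-20, -20, -40], [20, 20, -10])          # m/s
        # constant thrust slightly above lunar gravity (an assumption)
        r, v = simulate_descent(r0, v0, [0.0, 0.0, LUNAR_G + 0.2])
        if 0.0 <= r[2] <= 100.0 and np.linalg.norm(v) < 5.0:
            ok += 1
    return ok / n_samples
```

In the paper's setting the simple constant-thrust law would be replaced by the learned guidance, and the bounds by each sub-phase's terminal constraints; the rate itself is what the reported 0.87 accuracy corresponds to.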