AIAA Scitech 2020 Forum, 2020
DOI: 10.2514/6.2020-1376

Applications of Machine Learning and Monocular Vision for Autonomous On-Orbit Proximity Operations


Cited by 5 publications (6 citation statements)
References 8 publications
“…However, it is uniquely difficult to simulate realistic orbital scenes due to the diversity of lighting conditions, vehicle orientations, and image noise present in the space environment. A major takeaway from our prior work on the NASA Seeker mission [9] is that models trained on a sub-par synthetic image dataset, both in terms of fidelity and diversity, can struggle to generalize well to the real environment, a finding in line with other works like [10].…”
Section: Introduction (mentioning)
confidence: 62%
“…The Seeker Vision system, detailed in [9] and flown on NASA JSC's Seeker 1 Cubesat [11], utilized a convolutional neural network to identify and localize the Cygnus spacecraft in monocular imagery. Development and performance of this model was hindered by both challenges described previously: an inefficient dataset curation, training, and evaluation system along with sub-par synthetic imagery.…”
Section: Motivation (mentioning)
confidence: 99%
“…Yong et al [16] and Tao et al [17] achieved space object recognition using a multilayer convolutional neural network based on LeNet and AlexNet separately. Dhamani et al [18] selected the MobileNet V1 Single Shot Detector (SSD) architecture for space object detection.…”
Section: Related Work (mentioning)
confidence: 99%
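The MobileNet SSD family cited above is light enough to run on embedded flight hardware. As a rough illustration of how such a detector is invoked on a single monocular frame, the sketch below drives a TFLite export through TensorFlow's interpreter API; the model filename, the 300x300 uint8 input, and the output tensor ordering are assumptions for illustration, not details taken from the cited papers.

```python
# Hedged sketch: single-frame inference with an SSD MobileNet TFLite export.
# "ssd_mobilenet_v1.tflite" is a hypothetical filename; the input resolution and
# the boxes/classes/scores output ordering are typical but not guaranteed.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_v1.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

frame = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

boxes = interpreter.get_tensor(outs[0]["index"])   # [1, N, 4] normalized ymin, xmin, ymax, xmax
scores = interpreter.get_tensor(outs[2]["index"])  # [1, N] per-detection confidence
keep = scores[0] > 0.5                             # simple score threshold
print(f"{int(keep.sum())} detections above 0.5")
```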
“…To benchmark the pose estimation model, we use the Intel Joule 570x single-board computer. It has been used previously in a CNN-based visual navigation system onboard a 3U CubeSat, 13 establishing its viability as flight-ready commercial off-the-shelf (COTS) hardware. The computational capabilities of the Joule are severely limited in comparison to typical ground-based hardware such as a laptop or smartphone.…”
Section: Hardware Performance (mentioning)
confidence: 99%
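Benchmarking on constrained hardware such as the Joule typically amounts to timing repeated forward passes after a short warm-up. The helper below is a generic sketch of that procedure, assuming a model is already loaded; `run_inference`, the warm-up count, and the iteration count are placeholders, not values reported in the cited work.

```python
# Hedged sketch: mean per-frame latency for an already-loaded model.
# `run_inference` is a placeholder for whatever call executes one forward pass.
import time

def mean_latency(run_inference, frame, warmup=5, iters=50):
    for _ in range(warmup):              # discard startup effects (lazy init, caches)
        run_inference(frame)
    start = time.perf_counter()
    for _ in range(iters):
        run_inference(frame)
    return (time.perf_counter() - start) / iters   # seconds per frame

# e.g. with the TFLite interpreter sketched earlier:
# print(mean_latency(lambda f: interpreter.invoke(), frame))
```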
“…In addition, we present a thorough benchmark of the system's performance on an Intel Joule 570x single-board computer, the same type which flew on the NASA Johnson Space Center (JSC) Seeker CubeSat mission in September 2019. 13 We first provide a survey of related work. Then, we describe the details of our pose estimation architecture, as well as our synthetic data generation scheme using the Northrop Grumman Enhanced Cygnus vehicle as our target spacecraft.…”
Section: Introduction (mentioning)
confidence: 99%