2020
DOI: 10.1016/j.actaastro.2020.07.034

DeepLO: Multi-projection deep LIDAR odometry for space orbital robotics rendezvous relative navigation

Abstract: This work proposes a new Light Detection and Ranging (LIDAR) based navigation architecture that is appropriate for uncooperative relative robotic space navigation applications. In contrast to current solutions that exploit 3D LIDAR data, our architecture suggests a Deep Recurrent Convolutional Neural Network (DRCNN) that exploits multi-projected imagery of the acquired 3D LIDAR data. Advantages of the proposed DRCNN are: an effective feature representation facilitated by the Convolutional Neural Network module…

Cited by 18 publications (7 citation statements)
References 53 publications
“…Contrary to the above examples, ground-based applications have recently adopted the use of RNNs combined with features extracted by CNN front-ends to model the intrinsic motion dynamics from sequences of imaging data rather than individual inputs [39,40]; more specifically, these proposed LSTM-based [41] DRCNNs for visual odometry (VO) to estimate a car's egomotion. Kechagias-Stamatis et al. [42] introduced DeepLO, which followed the same philosophy for lidar-based relative navigation with a non-cooperative space target. Lidar data was preprocessed by quantisation and projection onto each plane in the target body frame of reference, thus creating three 2D depth images to be processed by a regular CNN.…”
Section: B Learning-based Methods (mentioning, confidence: 99%)
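The quoted preprocessing step (quantising a body-frame point cloud and projecting it onto each coordinate plane to form three 2D depth images) can be sketched as follows. This is an illustrative reconstruction only; the grid size, extent, and nearest-point depth rule are assumptions, not values taken from the cited paper.

```python
import numpy as np

def multi_project(points, grid=64, extent=2.0):
    """Project a 3D point cloud (already expressed in the target body
    frame) onto the XY, XZ and YZ planes, producing three 2D depth
    images in the spirit of the quoted DeepLO preprocessing.
    `grid` and `extent` are illustrative choices, not paper values."""
    # Quantise each coordinate into [0, grid) pixel bins.
    idx = np.clip(((points + extent) / (2 * extent) * grid).astype(int),
                  0, grid - 1)
    images = np.zeros((3, grid, grid))
    # One projection per plane: (u-axis, v-axis, dropped depth axis).
    for plane, (u, v, depth_axis) in enumerate([(0, 1, 2),
                                                (0, 2, 1),
                                                (1, 2, 0)]):
        depth = points[:, depth_axis] + extent  # shifted to be positive
        for p, d in zip(idx, depth):
            u_i, v_i = p[u], p[v]
            # Keep the nearest point that falls into each pixel.
            if images[plane, u_i, v_i] == 0 or d < images[plane, u_i, v_i]:
                images[plane, u_i, v_i] = d
    return images
```

Each of the three resulting depth images can then be fed to a conventional CNN front-end, which is what lets a 2D image architecture consume 3D lidar data.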
See 1 more Smart Citation
“…Contrary to the above examples, ground-based applications have recently adopted the use of RNNs combined with features extracted by CNN front-ends to model the intrinsic motion dynamics from sequences of imaging data rather than individual inputs [39,40]; more specifically, these proposed LSTM-based [41] DRCNNs for visual odometry (VO) to estimate a car's egomotion. Kechagias-Stamatis et al [42] introduced DeepLO, which followed the same philosophy for lidar-based relative navigation with a non-cooperative space target. Lidar data was preprocessed by quantisation and projection onto each plane in the target body frame of reference, thus creating three 2D depth images to be processed by a regular CNN.…”
Section: B Learning-based Methodsmentioning
confidence: 99%
“…Therefore, the main focus here is the investigation of the feasibility of a DRCNN for estimating the pose in rendezvous sequences. The problem has been previously studied by Kechagias-Stamatis et al. [42] for VO with lidar map inputs, but not for images. Furthermore, VO is concerned with estimating the motion between two time-consecutive images, but during an RV a single acquired image contains enough information relating F_t to F_c.…”
Section: A System Architecture (mentioning, confidence: 99%)
“…Develop from popular networks: AlexNet [6], [23], [21]; Faster R-CNN [7]; ResNet [35], [22]; GoogLeNet [41]; VGG [18]. Propose new networks: [22], [32], [40], [42]; U-Net, YOLOv3, ResNet [22]. Pose estimate by PnP…”
Section: Indirect DNN Methods Combining Optimiser (mentioning, confidence: 99%)
“…Instead of estimating poses at individual timesteps, Kechagias-Stamatis et al. [42] propose a DRCNN to regress the relative pose of spacecraft from frame to frame. For a relative spacecraft navigation system, these chained poses serve as continuous outputs, of which the continuity is vital to autonomous missions such as rendezvous and formation flyover.…”
Section: Direct Framework For Spacecraft Relative Pose Estimation (mentioning, confidence: 99%)
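The chaining described above (composing per-step relative pose estimates into a continuous trajectory) amounts to accumulating rigid-body transforms. A minimal sketch, assuming the frame-to-frame estimates are given as 4x4 homogeneous transform matrices (a common convention, not necessarily the representation used in the cited work):

```python
import numpy as np

def chain_poses(relative_poses):
    """Compose a sequence of frame-to-frame relative poses (4x4
    homogeneous transforms) into an absolute trajectory relative to
    the starting frame. Illustrative helper, not code from [42]."""
    trajectory = [np.eye(4)]  # start at the identity (initial frame)
    for T in relative_poses:
        # Each new absolute pose is the previous one composed with
        # the latest relative estimate.
        trajectory.append(trajectory[-1] @ T)
    return trajectory
```

Because each step multiplies in the newest relative estimate, any per-step regression error accumulates along the chain, which is why the continuity (and accuracy) of the per-frame outputs matters for missions such as rendezvous.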
“…Hence, spurred by the recent advances of deep learning in various domains ranging from object classification [4], [5] to odometry [6], several automatic COVID-19 diagnosis methods have been proposed that exploit CT or X-ray imagery [7], [8]. Current techniques may utilize existing pre-trained deep learning models combined with transfer learning [9], [10], or use custom networks [11]–[13].…”
Section: Introduction (mentioning, confidence: 99%)