2022
DOI: 10.1109/taes.2022.3180271
Spacecraft Proximity Maneuvering and Rendezvous With Collision Avoidance Based on Reinforcement Learning

Cited by 26 publications (4 citation statements)
References 28 publications
“…With the help of the probabilistic programming approach to conjunction assessment and the ability to obtain posterior distributions through Bayesian inference, the model can make more precise predictions and identify the crucial variables and orbital features that increase the likelihood of a collision [17], [18]. Broida et al. develop an autonomous navigation system that can handle unforeseen situations while maintaining an effective computational load [17].…”
Section: Collision Avoidance
mentioning
confidence: 99%
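The statement above points to probabilistic conjunction assessment via Bayesian inference. As a much simpler illustration of the underlying idea, the sketch below estimates a collision probability by Monte Carlo sampling of the relative position at closest approach under an assumed Gaussian uncertainty; the function name, hard-body radius, and covariance values are hypothetical and are not taken from the cited works.

```python
import numpy as np

def collision_probability_mc(mu_rel, cov_rel, hard_body_radius=10.0,
                             n_samples=100_000, seed=0):
    """Monte Carlo estimate of collision probability at closest approach.

    mu_rel  : mean relative position of the two objects at closest approach [m], shape (3,)
    cov_rel : combined position covariance of the two objects [m^2], shape (3, 3)
    A "collision" is approximated as a sampled miss distance falling inside the
    combined hard-body radius. All numerical values are placeholders.
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu_rel, cov_rel, size=n_samples)
    miss = np.linalg.norm(samples, axis=1)
    return float(np.mean(miss < hard_body_radius))

# Example: ~50 m mean miss distance with tens-of-metres positional uncertainty
p_c = collision_probability_mc(mu_rel=np.array([40.0, 30.0, 20.0]),
                               cov_rel=np.diag([400.0, 900.0, 250.0]))
```

A full Bayesian treatment would instead place priors on the orbital states and propagate posterior samples; the snippet only shows the final probability-of-collision step under a fixed uncertainty.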
“…To be more detailed, Proximal Policy Optimization (PPO) is implemented to create a control policy, and its performance is evaluated in a simulated Three-Degree-of-Freedom (3-DoF) dynamics environment. Qu et al., on the other hand, propose a deep deterministic policy gradient (DDPG) algorithm to realize autonomous spacecraft rendezvous (ASR) [18]. They also present a meta-learning-based approach that adjusts the control strategy so that the proposed model can adapt efficiently to other, similar scenarios.…”
Section: Collision Avoidance
mentioning
confidence: 99%
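The excerpt above names PPO trained in a simulated 3-DoF dynamics environment. The sketch below is a minimal, self-contained illustration of that setup, assuming stable-baselines3 (≥ 2.0) and gymnasium: a toy Clohessy-Wiltshire relative-motion environment with a distance-plus-control-effort reward, trained with PPO. The class name, dynamics constants, and reward weights are placeholders, not the formulation used in the cited papers.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class Rendezvous3DoF(gym.Env):
    """Toy 3-DoF relative-motion environment (Clohessy-Wiltshire dynamics).

    State : chaser position/velocity relative to the target [x, y, z, vx, vy, vz].
    Action: per-axis thrust acceleration command in [-1, 1], scaled by a_max.
    All constants are illustrative.
    """
    def __init__(self, n=0.0011, dt=1.0, a_max=0.1):
        super().__init__()
        self.n, self.dt, self.a_max = n, dt, a_max  # mean motion [rad/s], step [s], max accel [m/s^2]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        pos = self.np_random.uniform(-500.0, 500.0, size=3)
        vel = self.np_random.uniform(-0.5, 0.5, size=3)
        self.state = np.concatenate([pos, vel]).astype(np.float32)
        self.steps = 0
        return self.state, {}

    def step(self, action):
        x, y, z, vx, vy, vz = self.state
        ax, ay, az = self.a_max * np.clip(action, -1.0, 1.0)
        n, dt = self.n, self.dt
        # Clohessy-Wiltshire relative dynamics, integrated with forward Euler for brevity
        ax_tot = 3.0 * n**2 * x + 2.0 * n * vy + ax
        ay_tot = -2.0 * n * vx + ay
        az_tot = -n**2 * z + az
        vx, vy, vz = vx + ax_tot * dt, vy + ay_tot * dt, vz + az_tot * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        self.state = np.array([x, y, z, vx, vy, vz], dtype=np.float32)
        self.steps += 1

        dist = float(np.linalg.norm(self.state[:3]))
        reward = -1e-3 * dist - 1e-2 * float(np.linalg.norm(action))  # approach with low control effort
        terminated = dist < 1.0                                       # reached the target vicinity
        truncated = self.steps >= 2000
        if terminated:
            reward += 100.0
        return self.state, reward, terminated, truncated, {}

# Train PPO on the toy environment (default hyperparameters, not the paper's)
model = PPO("MlpPolicy", Rendezvous3DoF(), verbose=0)
model.learn(total_timesteps=50_000)
```

A DDPG-based variant such as the one attributed to Qu et al. would swap the on-policy PPO learner for an off-policy actor-critic with a replay buffer; the environment interface would stay the same.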
“…Artificial potential functions have also been combined with backstepping [15] and sliding mode control [16] to achieve robust and safe proximity operations. Machine learning techniques have likewise been considered to facilitate ARPOD missions [17]-[23]. Yang et al. [17] used model-based reinforcement learning and neural networks to address ARPOD mission requirements and computational constraints.…”
mentioning
confidence: 99%
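The excerpt above mentions artificial potential functions for safe proximity operations. Below is a minimal sketch of that idea (not the backstepping or sliding-mode controllers of [15], [16]): an attractive quadratic potential toward the target plus a repulsive potential active inside a keep-out radius around an obstacle, with the commanded acceleration following the negative potential gradient. All gains, radii, and positions are illustrative assumptions.

```python
import numpy as np

def apf_acceleration(r, r_target, r_obstacle,
                     k_att=1e-3, k_rep=5.0, rho0=50.0, a_max=0.1):
    """Artificial-potential-function guidance sketch (illustrative values).

    r, r_target, r_obstacle : 3-vectors of chaser, goal, and obstacle positions [m]
    Returns the commanded acceleration, saturated at a_max [m/s^2].
    """
    # Attractive term: -grad( 0.5 * k_att * ||r - r_target||^2 )
    a_att = -k_att * (r - r_target)

    # Repulsive term: -grad( 0.5 * k_rep * (1/rho - 1/rho0)^2 ), active only for rho < rho0
    d = r - r_obstacle
    rho = np.linalg.norm(d)
    if 1e-6 < rho < rho0:
        a_rep = k_rep * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (d / rho)
    else:
        a_rep = np.zeros(3)

    a_cmd = a_att + a_rep
    norm = np.linalg.norm(a_cmd)
    return a_cmd if norm <= a_max else a_cmd * (a_max / norm)

# Example: chaser 200 m behind the target, obstacle roughly halfway along the approach path
a = apf_acceleration(r=np.array([0.0, -200.0, 0.0]),
                     r_target=np.zeros(3),
                     r_obstacle=np.array([0.0, -100.0, 5.0]))
```

In the cited works the potential gradient is typically fed into a stabilizing controller (backstepping or sliding mode) rather than commanded directly, which is what provides the robustness guarantees.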
“…Through simulations and experiments, they demonstrated the viability of such an approach for spacecraft proximity operations with obstacle avoidance. In [23], collision avoidance between a chaser and a target was also achieved by leveraging deep reinforcement learning. Many of these articles showcased successful simulated docking maneuvers with obstacles or collision avoidance.…”
mentioning
confidence: 99%