2022
DOI: 10.1016/j.arcontrol.2022.07.004

Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges

Cited by 35 publications (9 citation statements)
References 81 publications

“…The terminal velocity and altitude are set as V_f = 3.2 Ma and h_f − R_0 = 23 km, respectively. The initial heading angle ψ_0 is calculated as equation (15). The intervals of the remaining initial-state parameters of the vehicle and the no-fly zone positions are summarized in Table 1.…”
Section: Reference Command Generation Using Supervised Learning
confidence: 99%
“…RL has been widely applied in various aerospace control fields, such as planetary landing, orbit transfer, attitude control, rendezvous and docking, and constellation orbital control. 15 In existing research, deep reinforcement learning is used to solve powered descent and landing control on Mars or the Moon. [16][17][18] State-of-the-art RL algorithms, Proximal Policy Optimization (PPO) and Twin Delayed Deep Deterministic Policy Gradient (TD3PG), are applied to obstacle avoidance for unmanned aerial vehicles (UAVs) 19 and missiles, 20 respectively.…”
Section: Introduction
confidence: 99%
“…Incorporating smart NCSs (SNCSs) into distributed satellite systems has been demonstrated to improve attitude control and reliability in communication. The smart approach to satellite attitude control is not new, [141][142][143] and comparatively more developed than its NCS counterparts. The dual design of control and network routing in these systems has increased cooperation capacity in simulations and robustness to network-induced errors.…”
Section: Advancements in Smart NCS for Satellites
confidence: 99%
“…Therefore, machine learning-based EMSs have received extensive attention in recent years, with reinforcement learning (RL) and deep reinforcement learning (DRL) being the most widely studied. RL's main idea is to train a fully autonomous agent that interacts directly with its environment [31]. This differs from supervised and unsupervised machine learning, which need static data during the training process.…”
Section: Literature Review
confidence: 99%
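The agent–environment interaction loop that the last excerpt describes can be sketched minimally as tabular Q-learning on a toy task. This is a generic illustration only, not the energy-management controller from the cited work; the `ChainEnv` environment, the ε-greedy action choice, and all hyperparameter values are assumptions made for the example:

```python
import random

class ChainEnv:
    """Toy 1-D chain: states 0..4, reward 1.0 only for reaching state 4."""
    N_STATES = 5

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = move left, 1 = move right
        self.state = max(0, min(self.N_STATES - 1,
                                self.state + (1 if action else -1)))
        done = self.state == self.N_STATES - 1
        return self.state, (1.0 if done else 0.0), done

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Q-learning: the agent learns purely by interacting with the
    environment, with no static labeled dataset (the contrast the
    excerpt draws with supervised/unsupervised learning)."""
    random.seed(seed)
    env = ChainEnv()
    q = [[0.0, 0.0] for _ in range(ChainEnv.N_STATES)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2, r, done = env.step(a)
            # Q-learning update: bootstrap on the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, the greedy policy should prefer "right" (action 1)
# in every non-terminal state, since only state 4 yields reward.
```

The same reset/step/update loop underlies the PPO and TD3 applications mentioned earlier; those methods replace the Q-table with neural-network function approximators but keep the interaction structure.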