AIAA SCITECH 2022 Forum (2022)
DOI: 10.2514/6.2022-2078

Soft Actor-Critic Deep Reinforcement Learning for Fault Tolerant Flight Control

Cited by 15 publications (23 citation statements)
References 18 publications
“…These are DRL techniques that generally use a larger number of hidden layers in the ANNs of the RL architecture to be able to work in more complex environments and deal with large continuous state and action spaces. While the focus of this thesis is on the IDHP architecture applied to flight control, these models are also of interest as they are able to deal with more complex tasks like the direct control of the full 6 degrees of freedom of an aircraft [75]. However, due to the addition of hidden layers, these models require significantly longer training times and may have limited online learning capabilities.…”
Section: Other Model-free Reinforcement Learning Models (mentioning)
Confidence: 99%
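To make the depth trade-off in this statement concrete, below is a minimal sketch of the kind of multi-hidden-layer actor network the passage contrasts with shallow IDHP designs. It is illustrative only, not the cited paper's implementation; PyTorch, the layer widths, and the state/action dimensions are all assumptions of this sketch.

    # Sketch of a deep actor for continuous control (illustrative, not the
    # paper's network). Maps a continuous aircraft state to bounded commands.
    import torch
    import torch.nn as nn

    class DeepActor(nn.Module):
        def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
            super().__init__()
            # Several hidden layers: the extra depth that distinguishes DRL
            # actors from the single-hidden-layer networks typical of IDHP.
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
                nn.Tanh(),  # squash to [-1, 1] for bounded actuator commands
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    # Hypothetical dimensions: a 12-dimensional rigid-body state, 4 actuators.
    actor = DeepActor(state_dim=12, action_dim=4)
    action = actor(torch.randn(1, 12))

The added depth is what lets such an actor represent control laws over large continuous state and action spaces, and it is also what lengthens training relative to the shallow IDHP networks the quotation mentions.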
“…Thanks to the high sample efficiency of SAC, the controller development is much easier, and it has been able to outperform other state-of-the-art model-free algorithms like DDPG, PPO, and TD3. SAC has already been applied to the low-level control of a quadcopter [92], as well as for a large fixed-wing business jet [75]. The latter showed the promising capabilities of this method to develop robust controllers for CS-25 certified aircraft.…”
Section: Soft Actor-Critic (mentioning)
Confidence: 99%
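For reference, the "soft" in Soft Actor-Critic denotes an entropy-regularized objective; a standard statement of it (the general SAC formulation, not a result specific to the cited paper) is

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\!\left( \pi(\cdot \mid s_t) \right) \right],

where \rho_\pi is the state-action distribution induced by the policy and the temperature \alpha weights policy entropy against reward. The entropy bonus sustains exploration, which is commonly credited for the sample efficiency and robustness the quotation highlights.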