2021
DOI: 10.1109/taes.2021.3074134
A Reinforcement Learning Approach for Transient Control of Liquid Rocket Engines

Abstract: Nowadays, liquid rocket engines use closed-loop control mostly near steady operating conditions. The control of the transient phases is traditionally performed in open loop due to the highly nonlinear system dynamics. This situation is unsatisfactory, in particular for reusable engines. The open-loop control system cannot provide optimal engine performance due to external disturbances or the degradation of engine components over time. In this paper, we study a deep reinforcement learning approach for optimal con…
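
The abstract frames engine start-up as a control task that a reinforcement learning agent learns by interacting with a simulated engine and being rewarded for reaching the commanded operating point. As a rough illustration of that framing only (not the authors' environment, dynamics, or algorithm), the sketch below defines a toy gym-style start-up environment whose reward penalizes deviation of chamber pressure and mixture ratio from their setpoints, and rolls it out with a random policy standing in for a trained agent. All names (EngineStartupEnv, the valve-command action, the lag dynamics) are assumptions made for this sketch.

# Illustrative sketch only: a toy gym-style start-up environment plus a
# random-policy rollout. Nothing here is taken from the cited paper.
import numpy as np

class EngineStartupEnv:
    """Toy transient model: chamber pressure and mixture ratio follow the
    commanded valve positions with a first-order lag; the reward penalizes
    the squared tracking error with respect to the steady-state targets."""

    def __init__(self, p_set=1.0, mr_set=1.0, dt=0.05, horizon=200):
        self.p_set, self.mr_set = p_set, mr_set      # steady-state targets
        self.dt, self.horizon = dt, horizon
        self.reset()

    def reset(self):
        self.t = 0
        self.state = np.zeros(2)                     # [chamber pressure, mixture ratio]
        return self.state.copy()

    def step(self, action):
        # action: commanded fuel/oxidizer valve positions, clipped to [0, 1]
        action = np.clip(action, 0.0, 1.0)
        tau = 0.5                                    # assumed lag time constant (s)
        target = np.array([action.mean(), action[1] / max(action[0], 1e-3)])
        self.state += self.dt / tau * (target - self.state)
        self.t += 1
        err = np.array([self.p_set, self.mr_set]) - self.state
        reward = -float(err @ err)                   # negative squared tracking error
        done = self.t >= self.horizon
        return self.state.copy(), reward, done, {}

env = EngineStartupEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = np.random.uniform(0.0, 1.0, size=2)    # placeholder for a learned policy
    obs, reward, done, _ = env.step(action)
    total += reward
print(f"episode return with a random policy: {total:.2f}")

In an actual study, the random action would be replaced by a policy trained with a deep RL algorithm; the state definition and reward shaping are the main design choices that determine which transient behaviour the agent learns to prefer.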


Cited by 17 publications (7 citation statements). References 32 publications.
“…Hardware upgrades are currently being undertaken at the test facility that will allow the implementation of these methods into the engine control algorithm. 38…”
Section: Discussion (mentioning)
confidence: 99%
“…It demonstrated that the designed controller can reduce the overshoot of thrust, as well as the pressure and mixture ratio. In the work by Waxenegger-Wilfing et al., 23 a reinforcement learning (RL) approach for the optimal control of the start-up process of a liquid rocket engine was presented. The method can track different steady-state operating points.…”
Section: Introduction (mentioning)
confidence: 99%
“…Among machine learning approaches, reinforcement learning (RL) has demonstrated unprecedented capabilities in solving decision-making problems [2], which is key to intelligently behave in previously unexplored dynamic environments. Remarkable progress has been registered in developing RL algorithms for various robotic applications including, but not limited to, manipulation [3], navigation [4], [5], tracking [6], path planning [7], and control [8], [9], [10], [11].…”
Section: Introduction (mentioning)
confidence: 99%