48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit 2012
DOI: 10.2514/6.2012-3873
Progress of the development of an all-electric control system of a rocket engine

Cited by 3 publications (5 citation statements) | References 0 publications
“…This failure mode can be averted by smoothing out the Q-function over similar actions. For this, one computes the action used to form the Q-learning target in the following way: u′(x′) = clip(π(x′; w⁻) + clip(ε, −c, c), x_Low, x_High), (9) where ε ∼ N(0, σ) is noise sampled from a Gaussian distribution.…”
Section: Reinforcement Learning
confidence: 99%
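The target-policy smoothing described in the statement above (clipped Gaussian noise added to the target policy's action, then clipped to the action bounds) can be sketched as follows. This is a minimal illustration, not the cited paper's implementation; the linear policy, noise scale, and bounds are hypothetical.

```python
import numpy as np


def smoothed_target_action(policy, x_next, sigma=0.2, c=0.5,
                           u_low=-1.0, u_high=1.0,
                           rng=np.random.default_rng(0)):
    """Compute the action used to form the Q-learning target:
    u'(x') = clip(pi(x'; w-) + clip(eps, -c, c), u_low, u_high),
    where eps ~ N(0, sigma) is Gaussian noise."""
    u = np.asarray(policy(x_next))
    eps = np.clip(rng.normal(0.0, sigma, size=u.shape), -c, c)
    return np.clip(u + eps, u_low, u_high)


# Usage with a hypothetical linear target policy pi(x; w-) = -0.5 * x:
u_target = smoothed_target_action(lambda x: -0.5 * x, np.array([2.0, -2.0]))
```

Clipping the noise before adding it keeps the perturbed target action close to the deterministic policy output, which is what regularizes the Q-function over similar actions.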
“…Although the importance of closed-loop control has been evident for many years, the majority of rocket engines still employ valves operated by pneumatic actuators, which are too inefficient for a sophisticated closed-loop control system. The development of an all-electric control system started in the late 90s in Europe [9]. The future European Prometheus engine will have such a system [10].…”
Section: Introduction
confidence: 99%
“…In Q-learning, one starts from an arbitrary initial Q-function Q_0 and updates it using observed state transitions and rewards. The update rule is of the following form: Q_{k+1}(x_k, u_k) = Q_k(x_k, u_k) + α_k [ r_{k+1} + γ max_{u′} Q_k(x_{k+1}, u′) − Q_k(x_k, u_k) ], (8) where α_k ∈ (0, 1] is the learning rate. The term inside the square brackets is nothing else than the difference between the updated estimate of the optimal Q-value of (x_k, u_k) and the current estimate Q_k(x_k, u_k).…”
Section: Reinforcement Learning
confidence: 99%
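The tabular Q-learning update rule quoted above can be sketched in a few lines. The state/action-space sizes, learning rate, discount factor, and sample transition below are illustrative, not taken from the cited work.

```python
import numpy as np

# Hypothetical small problem: 4 states, 2 actions, Q initialized to zero.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))


def q_update(Q, x, u, r, x_next, alpha=0.5, gamma=0.9):
    """One Q-learning step:
    Q[x, u] += alpha * (r + gamma * max_u' Q[x', u'] - Q[x, u]).
    The parenthesized term is the bracketed difference in equation (8)."""
    td_error = r + gamma * np.max(Q[x_next]) - Q[x, u]
    Q[x, u] += alpha * td_error
    return Q


# Usage: one observed transition (x=0, u=1, r=1.0, x'=2).
Q = q_update(Q, x=0, u=1, r=1.0, x_next=2)
# Q[0, 1] = 0 + 0.5 * (1.0 + 0.9 * 0 - 0) = 0.5
```

With Q initialized to zero, the bracketed term reduces to the immediate reward, so the first update moves Q[x, u] a fraction α of the way toward r.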
“…Although the importance of closed-loop control has been evident for many years, the majority of rocket engines still employ valves operated by pneumatic actuators, which are too inefficient for a sophisticated closed-loop control system. The development of an all-electric control system started in the late 90s in Europe [8]. The future European Prometheus engine will have such a system [9].…”
Section: Introduction
confidence: 99%
“…However, electrical actuators are gaining importance in valve-position control compared with traditional pneumatic ones. The main reasons are that auxiliary helium-gas consumption and costs can be reduced and throttling efficiency is improved [59]. That is why the new hardware-in-the-loop (HIL) simulation platform by CNES and ArianeGroup [60] includes real valve-internal electric actuators.…”
Section: Sensors and Actuators Considerations
confidence: 99%