2023
DOI: 10.3390/en16083450

A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation

Abstract: One major cost of improving the automotive fuel economy while simultaneously reducing tailpipe emissions is increased powertrain complexity. This complexity has consequently increased the resources (both time and money) needed to develop such powertrains. Powertrain performance is heavily influenced by the quality of the controller/calibration. Since traditional control development processes are becoming resource-intensive, better alternate methods are worth pursuing. Recently, reinforcement learning (RL), a m…

Cited by 4 publications (1 citation statement)
References 137 publications (225 reference statements)
“…Thus, the control policy could be improved continuously based on the measurement feedback. Additionally, several other physical nonlinearities and parasitic influences from other drive system components can be adequately captured in the reinforced-based learning environment [30,31]. Although the learning process can be performed offline, the trained RL agent can connect to a control interface in the controller making it suitable for real-time implementation without demanding a drastic change in the embedded hardware [32,33].…”
Section: Introduction and Related Work
confidence: 99%
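The quoted statement describes a common deployment pattern: the RL policy is learned offline, and only the trained agent (a cheap policy lookup) runs in the real-time controller. As a minimal, purely illustrative sketch (not taken from the paper — the toy plant, state discretization, and reward are all hypothetical), tabular Q-learning on a 1-D regulation task shows the split between the offline learning phase and the deployment-time control interface:

```python
import random

# Illustrative sketch only: tabular Q-learning trained offline on a toy
# 1-D "regulate to target" task; at deployment, the frozen Q-table is
# queried with a cheap greedy lookup, as a real-time controller would.

N_STATES = 5          # discretized plant states (hypothetical)
ACTIONS = (-1, 0, 1)  # decrease / hold / increase actuator command
TARGET = 2            # state the controller should regulate to

def step(state, action):
    """Toy plant model: action shifts the state; reward penalizes distance to TARGET."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, -abs(nxt - TARGET)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Offline learning phase: epsilon-greedy Q-learning on the toy plant."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(20):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)               # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            s2, r = step(s, a)
            # Standard Q-learning temporal-difference update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

def policy(q, state):
    """Deployment-time control interface: a pure table lookup, no learning."""
    return max(ACTIONS, key=lambda a: q[(state, a)])

q_table = train()          # performed offline, before deployment
print(policy(q_table, 0))  # below target, expect the agent to command +1
print(policy(q_table, 4))  # above target, expect the agent to command -1
```

The point of the split is that `train()` can be arbitrarily expensive, while `policy()` is a constant-time lookup that fits a real-time loop without changing the embedded hardware — the property the citing authors highlight.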