2019
DOI: 10.3390/pr7100672

Intelligent Energy Management for Plug-in Hybrid Electric Bus with Limited State Space

Abstract: Tabular Q-learning (QL) can be easily implemented in a controller to realize self-learning energy management control of a plug-in hybrid electric bus (PHEB). However, the “curse of dimensionality” problem is difficult to avoid, as the design space is huge. This paper proposes a QL-PMP algorithm (QL and Pontryagin minimum principle (PMP)) to address the problem. The main novelty is that the difference between the feedback SOC (state of charge) and the reference SOC is exclusively designed as the state, and then a…
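As a rough illustration of the state design described in the abstract, the sketch below implements a tabular Q-learning update in Python whose only state is the discretized difference between the feedback SOC and the reference SOC. The grid sizes, bounds, hyperparameters, and variable names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: tabular Q-learning whose only state is the discretized
# difference between feedback SOC and reference SOC ("limited state space").
# Grid sizes, bounds, and hyperparameters are illustrative assumptions.
N_STATES, N_ACTIONS = 21, 10
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def state_index(soc_fb, soc_ref, half_range=0.2):
    """Map the SOC tracking error (feedback - reference) to a Q-table row."""
    err = float(np.clip(soc_fb - soc_ref, -half_range, half_range))
    return int(round((err + half_range) / (2.0 * half_range) * (N_STATES - 1)))

def choose_action(s):
    """Epsilon-greedy selection over the discrete action set."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))

def update(s, a, reward, s_next):
    """One-step Q-learning backup: Q <- Q + alpha * (TD error)."""
    Q[s, a] += ALPHA * (reward + GAMMA * float(np.max(Q[s_next])) - Q[s, a])
```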

Cited by 9 publications (11 citation statements) | References 23 publications
“…Q-learning [87-91], [93]: derive the optimal EMS for FCHEVs, aiming to improve FCHEV performance.
SARSA [92]: compare the performance of Q-learning and SARSA in an EMS for a FCHEV.
Q-learning [94,95]: propose an improved Q-learning that embeds a recursive algorithm to update the TPM online.
Q-learning [96,97], [100]: combine the merits of Q-learning, PMP, and DP.
Q-learning [98]: analyze the impact of algorithm hyperparameters on the EMS.
Policy iteration [99]: calculate the TPM of the power demand and apply the EMS in real time.
DP [101]: employ DP in off-line training and ECMS in the on-line application.
Q-learning [102]: discuss the influence of the number of state variables in the Q-learning algorithm.
Dyna-H [103]: analyze the difference between Dyna-H and Q-learning…”
Section: Algorithms, References, Content Description (mentioning; confidence: 99%)
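For the entries above that recursively update the transition probability matrix (TPM) online, a minimal counting-based recursion can be sketched as below; the demand discretization, prior, and names are assumptions, not taken from the cited works.

```python
import numpy as np

# Hedged sketch of an online, counting-based TPM update for a discretized
# power-demand signal. The grid size and the Laplace prior are illustrative
# assumptions.
N = 16                                 # assumed number of demand levels
counts = np.ones((N, N))               # Laplace prior avoids all-zero rows
tpm = counts / counts.sum(axis=1, keepdims=True)

def observe_transition(i_prev, i_curr):
    """Fold one observed demand transition into the running TPM estimate."""
    global tpm
    counts[i_prev, i_curr] += 1.0
    tpm = counts / counts.sum(axis=1, keepdims=True)
```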
“…Thus, the inputs of the PMP are designed as the shift instruction of the AMT (sh(t)) and the throttle of the engine (th(t)). Since only the SOC is designed as the state variable in the energy management, a compacted format (u(t) = [sh(t), th(t)]^T) is defined as the input and is taken as a one-dimensional control vector. Because the design space constituted by the input is huge, the control vector is sampled by Optimal Latin Hypercube Design (Opt.…
Section: The Formulation of the Self-learning Energy Management (mentioning; confidence: 99%)
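The Latin hypercube sampling of the control vector u(t) = [sh(t), th(t)]^T mentioned above can be sketched with SciPy's quasi-Monte Carlo module; SciPy's plain Latin hypercube stands in for the optimal LHD named in the text, and the bounds on the shift command and throttle are assumptions for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Hedged sketch: sample the 2-D control vector u(t) = [sh(t), th(t)]^T with
# a Latin hypercube. Plain LHS substitutes for the Opt. LHD in the text;
# bounds are illustrative assumptions.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=64)              # 64 candidate controls
# Assumed ranges: shift command in [-1, 1] (down/hold/up), throttle in [0, 1].
lower, upper = [-1.0, 0.0], [1.0, 1.0]
controls = qmc.scale(unit_samples, lower, upper)
shift_cmd = np.rint(controls[:, 0]).astype(int)  # discretize sh(t)
throttle = controls[:, 1]                        # continuous th(t)
```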
“…In our previous work, we only briefly defined the PMP problem; here, we detail the method as follows.…”
Section: The Formulation of the Self-learning Energy Management (mentioning; confidence: 99%)
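The cited section details the PMP problem; the conventional PMP formulation for HEV/PHEV energy management with SOC as the single state typically reads as below. The symbols are the standard ones, assumed here for illustration rather than copied from the paper.

```latex
% Conventional PMP formulation for (P)HEV energy management, SOC the single
% state, lambda(t) the costate; an assumed standard form, not the paper's.
\begin{aligned}
  \dot{\mathrm{SOC}}(t) &= f\bigl(\mathrm{SOC}(t), u(t), t\bigr),\\
  H\bigl(\mathrm{SOC}, u, \lambda, t\bigr)
    &= \dot{m}_f\bigl(u(t), t\bigr)
     + \lambda(t)\, f\bigl(\mathrm{SOC}(t), u(t), t\bigr),\\
  \dot{\lambda}(t) &= -\frac{\partial H}{\partial \mathrm{SOC}},\qquad
  u^{*}(t) = \arg\min_{u \in \mathcal{U}}
             H\bigl(\mathrm{SOC}(t), u, \lambda(t), t\bigr).
\end{aligned}
```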
“…In particular, in the field of studies concerning energy optimization in the civil construction sector, this collection of scientific works includes a publication on Hardware-in-the-Loop (HIL) simulation [11].…”
mentioning; confidence: 99%