2022
DOI: 10.1016/j.est.2021.103925
Hierarchical Q-learning network for online simultaneous optimization of energy efficiency and battery life of the battery/ultracapacitor electric vehicle

Cited by 21 publications (10 citation statements)
References 31 publications
“…[158] Q-learning
Advantages: 8% reduction of battery capacity loss; longer battery life and improved vehicle range; capable of handling various drive cycles and measurement noise; adaptable to numerous hybrid power systems.
Disadvantages: further state variables must be taken into account; not validated against some experimental data from externally introduced models.
[159] Q-learning
Advantages: enhanced dynamic efficiency; decreased fuel use and calculation time; power-response performance similar to that of the existing DP-based technique.
[160] Q-learning
Advantages: fuel consumption and power fluctuation reduced by 5.59% and 13%, respectively; convergence speed increased by 69%.
Disadvantages: real-time applications are required.
[164] DQN
Advantages: effective use of acquired knowledge; increased computational speed; boosted fuel economy; independence from prior information on driving cycles…”
Section: References/Methods (mentioning; confidence: 99%)
“…Ref. [158] aimed to introduce a Q‐learning approach to optimise the supervisory management system of an EV with a mixed charging system of a battery and UC. To allocate two control levels, a hierarchical Q‐learning network composed of two independent Q tables was developed.…”
Section: Energy Management Strategies for HEVs (mentioning; confidence: 99%)
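The two-level structure described in the excerpt — a hierarchical Q-learning network built from two independent Q-tables — can be sketched with tabular Q-learning: an upper table that picks the battery/UC power split and a lower table that refines the command, each updated with the standard Q-learning rule. All grid sizes, hyperparameters, and the toy reward below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 5     # coarse discretization (assumed, for illustration)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Two independent Q-tables, one per control level (hierarchical structure).
Q_upper = np.zeros((N_STATES, N_ACTIONS))   # chooses the battery/UC power split
Q_lower = np.zeros((N_STATES, N_ACTIONS))   # refines the UC current command

def eps_greedy(Q, s):
    """Epsilon-greedy action selection on one Q-table row."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))

def q_update(Q, s, a, r, s_next):
    """Standard tabular Q-learning update."""
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

# One illustrative training step on a toy transition. The reward is a
# placeholder; the paper's actual cost combines energy loss and battery wear.
s = 3
a_up = eps_greedy(Q_upper, s)      # upper level: power split decision
a_lo = eps_greedy(Q_lower, a_up)   # lower level conditioned on the upper action
r, s_next = -0.5, 4
q_update(Q_upper, s, a_up, r, s_next)
q_update(Q_lower, a_up, a_lo, r, a_up)
```

With zero-initialized tables, a single update moves the visited entry by `ALPHA * r = -0.05`, which is the expected first step toward the converged value function.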
“…The easiest way to model an analogous circuit for an ultracapacitor is with a series resistor and capacitor (RC) circuit. This model comprises a capacitor that represents the charge capacity during the charging and discharging of the ultracapacitor and a resistor that represents the internal resistance [23]. The output current and voltage from the ultracapacitor are calculated as follows;…”
Section: Modelling of Ultra-capacitor (mentioning; confidence: 99%)
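The series-RC behaviour described in the excerpt can be sketched numerically: the terminal voltage is the capacitor voltage minus the drop across the equivalent series resistance, and the capacitor voltage integrates the load current. The parameter values below are illustrative, not taken from reference [23].

```python
# Simple series-RC ultracapacitor model (discharge convention):
#   v_terminal = v_c - i * R_esr,   dv_c/dt = -i / C
C = 500.0      # capacitance [F] (illustrative value)
R_ESR = 0.02   # equivalent series resistance [ohm] (illustrative value)

def step(v_c, i_load, dt):
    """Advance the model one time step under a constant discharge current."""
    v_terminal = v_c - i_load * R_ESR   # resistive drop at the terminals
    v_c_next = v_c - i_load * dt / C    # charge drained from the capacitor
    return v_terminal, v_c_next

v_c = 2.7                                   # initial capacitor voltage [V]
v_t, v_c = step(v_c, i_load=50.0, dt=1.0)   # 50 A discharge for 1 s
```

For this step the terminal voltage is 2.7 − 50·0.02 = 1.7 V and the capacitor voltage drops to 2.7 − 50/500 = 2.6 V, matching the series-RC relations above.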
“…The objective function, which minimizes the charging cost, is found using equation (23). The charging cost of each EV is calculated using the equation below;…”
Section: Initialization (mentioning; confidence: 99%)
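Equation (23) itself is not reproduced in the excerpt; a common form of a per-EV charging cost is the sum over time slots of the electricity price times the energy drawn in that slot. The sketch below assumes that form, with a made-up two-slot tariff.

```python
def charging_cost(prices, powers, dt_hours=1.0):
    """Cost of one EV's charging schedule: sum over slots of price * energy.

    prices: electricity price per kWh in each slot (assumed tariff)
    powers: charging power in kW in each slot
    dt_hours: slot duration in hours
    """
    return sum(p * pw * dt_hours for p, pw in zip(prices, powers))

# Two-slot example: 0.10 and 0.20 $/kWh, charging at 7 kW in each one-hour slot.
cost = charging_cost([0.10, 0.20], [7.0, 7.0])   # 0.7 + 1.4 = 2.1
```

Minimizing this objective then amounts to shifting charging power toward the cheaper slots, subject to the EV's energy demand and charger limits.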
“…However, the stochastic dynamic programming algorithm approach yields a control that depends on a particular state [12][13][14], while the deterministic dynamic algorithm solves the optimization problem by sequentially calculating each state at each time step in a backward order. This work makes two key advances compared to previous approaches.…”
Section: Literature Review (mentioning; confidence: 99%)
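The backward-order evaluation mentioned in the excerpt can be sketched for a tiny deterministic problem: starting from the terminal step, the cost-to-go of each state is computed from the already-known costs one step ahead. The grid sizes, transition rule, and stage cost below are illustrative placeholders.

```python
import numpy as np

N_T, N_S, N_A = 4, 3, 2   # time steps, discrete states, actions (illustrative)

def transition(s, a):
    """Deterministic next state: action 1 moves up, action 0 moves down."""
    return min(N_S - 1, max(0, s + (1 if a == 1 else -1)))

def stage_cost(s, a):
    """Placeholder running cost; a real EMS would use fuel/energy loss here."""
    return float(s + a)

# Backward deterministic DP: fill the cost-to-go table from the last step.
J = np.zeros((N_T + 1, N_S))            # terminal cost J[N_T, :] = 0
policy = np.zeros((N_T, N_S), dtype=int)
for t in range(N_T - 1, -1, -1):        # backward order, as described
    for s in range(N_S):
        costs = [stage_cost(s, a) + J[t + 1, transition(s, a)]
                 for a in range(N_A)]
        policy[t, s] = int(np.argmin(costs))
        J[t, s] = min(costs)
```

Because both the transitions and the cost are deterministic, a single backward sweep yields the optimal cost-to-go `J` and policy for every state and time step, in contrast to the state-dependent controls produced by the stochastic DP variants cited above.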