2021
DOI: 10.1109/tie.2020.3005071

A Novel Nonlinear Deep Reinforcement Learning Controller for DC–DC Power Buck Converters

Cited by 110 publications (33 citation statements)
References 45 publications
“…where T_sw is the period of the excitation waveform, dB/dt is the slope of the flux density with respect to time, and ΔB is the flux density variation within one switching period. k_i is defined in (9).…”
Section: B. Magnetic Loss Model, 1) Inductor Design Method
confidence: 99%
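The quoted loss model (period T_sw, slope dB/dt, swing ΔB, coefficient k_i) matches the form of the improved Generalized Steinmetz Equation (iGSE) commonly used for inductor core loss. A minimal sketch for a triangular flux waveform, as in a buck inductor; the Steinmetz parameters passed in are illustrative placeholders, not values from the cited paper, and the closed-form k_i below is the standard literature approximation, not necessarily Eq. (9) of that work:

```python
import math

def igse_core_loss(k, alpha, beta, delta_B, T_sw, duty):
    """Per-unit-volume core loss via the iGSE for a triangular
    flux-density waveform (typical buck inductor).

    k, alpha, beta : Steinmetz parameters of the core material
    delta_B        : peak-to-peak flux density swing per period [T]
    T_sw           : switching period [s]
    duty           : duty cycle (fraction of T_sw with rising flux)
    """
    # Widely used closed-form approximation for k_i, which rescales
    # the sinusoidal Steinmetz coefficient k for arbitrary waveforms.
    k_i = k / (2 ** (beta + 1) * math.pi ** (alpha - 1)
               * (0.2761 + 1.7061 / (alpha + 1.354)))
    # Piecewise-linear flux: |dB/dt| is constant on each sub-interval.
    t_rise, t_fall = duty * T_sw, (1.0 - duty) * T_sw
    slope_up, slope_down = delta_B / t_rise, delta_B / t_fall
    # Time-average of k_i * |dB/dt|^alpha * delta_B^(beta - alpha).
    integral = (abs(slope_up) ** alpha * t_rise
                + abs(slope_down) ** alpha * t_fall)
    return k_i * delta_B ** (beta - alpha) * integral / T_sw
```

Because the loss scales roughly as ΔB^β with β > α, halving the flux swing (e.g. by doubling the inductance) reduces core loss more than proportionally.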
See 1 more Smart Citation
“…where T sw is periods of excitation waveform, dB dt is slope of flux density varying from time and ∆B is flux density variation within one switching period. k i is defined as (9).…”
Section: B Magnetic Loss Model 1) Inductor Design Methodmentioning
confidence: 99%
“…As for its application in power electronics, Tang proposed a deep reinforcement learning-aided method to optimize the triple-phase-shift control scheme in order to achieve high efficiency in a dual active bridge converter [8]. Gheisarnejad proposed an RL-aided controller for DC-DC power buck converters [9]. RL is implemented to reduce the observer estimation error.…”
Section: Introduction
confidence: 99%
“…Such a method of combining deep NNs with RL is called deep reinforcement learning (DRL), which has the advantage of overcoming the "curse of dimensionality" and does not need system identification steps that may be difficult to carry out in practice. Based on these advantages, DRL-based methods have been applied to the optimization of wind power forecast uncertainty [28], multi-scenario emergency controllers [29], power electronic controllers [30], and EV charging scheduling. Specifically, [31] considers the randomness of commuting behavior and the uncertainty of electricity prices, and the authors apply a naive data-driven deep Q-network (DQN) algorithm to obtain a charging strategy without any model information.…”
Section: Introduction
confidence: 99%
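The DQN approach mentioned for charging scheduling approximates the Q-learning update with a neural network. A minimal tabular sketch of that underlying update; the state/action encoding, environment, and reward here are illustrative placeholders, not details from [31]:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 3          # e.g. price levels x {charge, idle, discharge}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    # Placeholder environment: random next price level, toy cost signal.
    next_state = int(rng.integers(n_states))
    reward = -abs(action - 1) * state  # illustrative only
    return next_state, reward

state = 0
for _ in range(1000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning Bellman update (what a DQN approximates with a network)
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    state = next_state
```

A DQN replaces the table `Q` with a neural network and stabilizes the same update with experience replay and a target network, which is what makes it model-free: no system identification step is required.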
“…However, the transient behavior of the converter has not been considered, and the converter is not well controlled under severe load and input disturbances. Deep reinforcement learning-based control techniques have recently been employed in several applications [36]-[38]. These works apply reinforcement learning to different strategies, such as sliding mode and model predictive control, to enhance the transient and steady-state behavior of DC-DC converters.…”
Section: I. Introduction
confidence: 99%