2019
DOI: 10.1109/access.2019.2924030
Demand Response Management for Industrial Facilities: A Deep Reinforcement Learning Approach

Abstract: As a major consumer of energy, the industrial sector must assume responsibility for improving energy efficiency and reducing carbon emissions. However, most existing studies on industrial energy management struggle with modeling complex industrial processes. To address this issue, a model-free demand response (DR) scheme for industrial facilities was developed. In practical terms, we first formulated the Markov decision process (MDP) for industrial DR, which presents the composition of the state, acti…
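The abstract is truncated before the MDP components are spelled out, but the general shape of such a formulation can be sketched. The following is an illustrative sketch only, not the paper's actual state, action, or reward design: all names, units, and the production-rate constant are invented for the example. It shows an MDP-style state, a reward trading production value against energy cost, and a one-step transition.

```python
from dataclasses import dataclass

# Hypothetical MDP for industrial demand response (illustrative only, not
# the paper's formulation): the agent observes the hour, the electricity
# price, and the production backlog, then picks a power level.

@dataclass
class State:
    hour: int        # time step within the scheduling horizon
    price: float     # current electricity price ($/kWh)
    backlog: float   # unfinished production (units)

RATE_UNITS_PER_KWH = 2.0  # assumed production rate of the facility

def reward(state: State, power_kw: float) -> float:
    """Value of production completed this interval minus the energy cost."""
    energy_cost = state.price * power_kw
    produced = RATE_UNITS_PER_KWH * power_kw
    return produced - energy_cost

def step(state: State, power_kw: float, next_price: float) -> tuple[State, float]:
    """One MDP transition: consume energy, reduce backlog, advance time."""
    r = reward(state, power_kw)
    produced = RATE_UNITS_PER_KWH * power_kw
    nxt = State(hour=state.hour + 1,
                price=next_price,
                backlog=max(0.0, state.backlog - produced))
    return nxt, r
```

A model-free agent would learn a policy over such transitions from interaction alone, without an explicit model of the industrial process.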

Cited by 56 publications (18 citation statements)
References 40 publications
“…Compared to conventional methods (FIFO, random forest), the deep RL algorithm was able to increase profits and minimise carbon consumption, while optimising lead time and cost. Considering a steel powder manufacturing process, Huang et al (2019) proposed a model-free control design to optimise the energy consumption plan based on current energy costs and individual process components (i.e. atomizer, crusher).…”
Section: Energy Management
confidence: 99%
“…A concise review of DRL-based smart building energy management methods is summarized in [21]. More recently, various DRL-based BEMS methods have been developed, including the scheduling of the ESS and HVAC in residential buildings based on a deep deterministic policy gradient (DDPG) method [22], the management of utility-scale interruptible loads in a dueling deep Q network [23], actor-critic-based demand response management for industrial facilities [24], and the control of the state of charge of a group of multiple ESSs using the DDPG method [25].…”
Section: Introduction
confidence: 99%
“…Xia et al proposed a digital twin approach for smart manufacturing, in which a deep Q-network (DQN) agent is trained in virtual systems to establish an optimal policy and can drive decision making for operation in a real-world system [17]. Huang et al proposed an RL-based demand response (DR) scheme for steel powder manufacturing, where actor-critic-based deep reinforcement learning (DRL) is utilized for efficient scheduling [18]. Through DR, the agent reduced the energy costs of the manufacturing process with an efficient manufacturing schedule. RL-based scheduling frameworks have been applied not only to manufacturing processes but also to various plants, such as vinyl acetate monomer plants, circulating fluidized bed plants, coal-fired power plants, nuclear power plants, and WWTPs [19]–[24].…”
Section: Introduction
confidence: 99%
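The value-based scheduling idea cited above (DQN-style agents learning when to run flexible loads) can be illustrated with a deliberately tiny tabular analogue. This is a toy sketch, not any cited paper's implementation: the price series, production value, and hyperparameters are all invented, and a DQN or actor-critic method would replace the table with a neural network.

```python
import random

# Toy Q-learning scheduler (illustrative only): decide, hour by hour,
# whether to run a 1 kWh flexible load given known hourly prices.
random.seed(0)
prices = [0.30, 0.10, 0.25, 0.05]   # assumed $/kWh per hour
actions = [0, 1]                     # 0 = idle, 1 = run the load
Q = {(h, a): 0.0 for h in range(len(prices)) for a in actions}
alpha, gamma, eps = 0.5, 1.0, 0.1

def reward(h, a):
    # Running earns a fixed production value of 0.2 minus the energy cost.
    return a * (0.2 - prices[h])

for _ in range(2000):                # repeated training sweeps
    for h in range(len(prices)):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(h, x)])
        r = reward(h, a)
        nxt = max(Q[(h + 1, x)] for x in actions) if h + 1 < len(prices) else 0.0
        # standard Q-learning temporal-difference update
        Q[(h, a)] += alpha * (r + gamma * nxt - Q[(h, a)])

policy = [max(actions, key=lambda a: Q[(h, a)]) for h in range(len(prices))]
```

The learned policy runs the load only in hours whose price is below the production value (0.2), i.e. it shifts demand into cheap hours, which is the essence of price-responsive DR scheduling.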