The food industry faces significant challenges in managing operational costs due to its high energy intensity and rising energy prices. Industrial food-processing facilities, with substantial thermal capacities and large demands for cooling and heating, offer promising opportunities for demand response (DR) strategies. This study explores the application of deep reinforcement learning (RL) as an innovative, data-driven approach to DR in the food industry. By leveraging the adaptive, self-learning capabilities of RL, energy costs in the investigated plant are effectively reduced. The RL algorithm was compared with the well-established optimization method Mixed-Integer Linear Programming (MILP), and both were benchmarked against a reference scenario without DR. The two optimization strategies achieve cost savings of 17.57% and 18.65% for RL and MILP, respectively. Although RL is slightly less effective at reducing costs, it significantly outperforms MILP in computational speed, being approximately 20 times faster. During operation, RL needs only 2 ms per optimization compared with 19 s for MILP, making it a promising optimization tool for edge computing. Moreover, while MILP's computation time grows considerably with the number of binary variables, RL efficiently learns the dynamic system behavior and scales to more complex systems without significant performance degradation. These results highlight that deep RL, applied to DR, offers substantial cost savings and computational efficiency, with broad applicability to energy management across a range of settings.
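To make the RL-for-DR idea concrete, the following is a minimal, hypothetical sketch: a tabular Q-learning agent (a deliberate simplification of the deep RL used in the study) schedules a chiller against a synthetic day-ahead price curve. All plant parameters, prices, and penalty values below are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch (hypothetical, not the paper's code): tabular Q-learning for
# demand response on a toy cold-storage plant. State = (hour, storage level),
# action = run chiller (1) or idle (0). The study uses deep RL; a tabular agent
# keeps the example self-contained while showing the same cost-shifting idea.
import numpy as np

rng = np.random.default_rng(0)

HOURS = 24
LEVELS = 10                      # discretized thermal-storage levels
PRICES = 0.10 + 0.15 * np.sin(np.linspace(0, 2 * np.pi, HOURS)) ** 2  # EUR/kWh, synthetic

def step(hour, level, action):
    """One simulation step: running the chiller costs energy, idling drains the store."""
    cost = PRICES[hour] * 50.0 * action              # 50 kW chiller, assumed value
    level = min(level + 2, LEVELS - 1) if action else max(level - 1, 0)
    reward = -cost - (100.0 if level == 0 else 0.0)  # penalty: cooling demand unmet
    return (hour + 1) % HOURS, level, reward

Q = np.zeros((HOURS, LEVELS, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(5000):
    hour, level = 0, LEVELS // 2
    for _ in range(HOURS):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[hour, level]))
        nh, nl, r = step(hour, level, a)
        Q[hour, level, a] += alpha * (r + gamma * Q[nh, nl].max() - Q[hour, level, a])
        hour, level = nh, nl

# Greedy policy: the chiller runs mostly in cheap hours and coasts through price peaks.
policy = Q.argmax(axis=2)
print(policy[:, LEVELS // 2])    # action per hour at mid storage level
```

Once trained, acting is a single table (or network) lookup per step, which is why RL inference can run in milliseconds on edge hardware, whereas MILP must re-solve a full optimization problem at every decision point.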
From 2028 onward, EU regulations become stricter, imposing net-zero energy building (NZEB) standards on new residential buildings, including on-site renewable energy integration. Heat pumps (HPs) exploiting the thermal building mass, combined with Model Predictive Control (MPC), provide a viable solution. However, the potential of MPC in NZEBs, including its impact on indoor comfort, has not yet been investigated comprehensively. We therefore present a co-simulative approach combining MPC optimization with IDA ICE building simulation. The demand response (DR) potential of a ground-source HP and the long-term indoor comfort in an NZEB located in Vorarlberg, Austria, are investigated over a one-year period. Optimization is performed using Mixed-Integer Linear Programming (MILP) based on a simplified RC model. The HP in the building simulation is controlled by power signals obtained from the optimization. The investigation shows reductions in electricity costs of up to 49% for the HP and up to 5% for the building, as well as increases in PV self-consumption and the self-sufficiency ratio of up to 4 percentage points each in two distinct optimization scenarios. Grid consumption consequently decreased by up to 5%. Moreover, compared with the reference PI controller, the MPC scenarios enhanced indoor comfort by reducing room temperature fluctuations and lowering the average percentage of people dissatisfied by 1 percentage point, resulting in more stable indoor conditions. Precooling strategies in particular mitigated overheating risks in summer and kept indoor comfort within EN 16798-1 class II limits.
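The optimization layer described above can be sketched as a small MILP: heat-pump scheduling over a first-order RC building model, written here with PuLP. This is a hypothetical formulation, not the study's model; the RC parameters, tariff, COP, and comfort band are invented for illustration.

```python
# Minimal sketch (hypothetical, not the study's model): MILP-based HP scheduling
# over a first-order RC building model, solved with PuLP. All parameter values
# (R, C, P_MAX, COP, prices, comfort band) are illustrative assumptions.
import pulp

H = 24                                  # horizon [h], dt = 1 h
R, C = 5.0, 10.0                        # thermal resistance [K/kW], capacitance [kWh/K]
P_MAX, COP = 3.0, 4.0                   # HP electric power [kW], coefficient of performance
T_OUT = [-2.0] * H                      # outdoor temperature [degC], synthetic
PRICE = [0.30 if 7 <= t < 20 else 0.12 for t in range(H)]  # day/night tariff [EUR/kWh]
T_MIN, T_MAX, T0 = 20.0, 24.0, 21.0     # comfort band and initial room temperature

prob = pulp.LpProblem("hp_mpc", pulp.LpMinimize)
u = pulp.LpVariable.dicts("on", range(H), cat="Binary")        # HP on/off decisions
T = pulp.LpVariable.dicts("T", range(H + 1), T_MIN, T_MAX)     # room temperature [degC]

prob += pulp.lpSum(PRICE[t] * P_MAX * u[t] for t in range(H))  # electricity cost objective
prob += T[0] == T0
for t in range(H):
    # Discretized RC dynamics: dT/dt = (Q_hp - (T - T_out)/R) / C
    prob += T[t + 1] == T[t] + (COP * P_MAX * u[t] - (T[t] - T_OUT[t]) / R) / C

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("cost [EUR]:", pulp.value(prob.objective))
print("schedule:", [int(u[t].value()) for t in range(H)])
```

In an MPC loop, only the first step of the resulting on/off schedule would be applied before re-solving with updated measurements. The binary on/off variables are what make this an MILP rather than a plain LP, and they are also what drives the solve-time growth noted in the first abstract as system complexity increases.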