Modern electrical systems are evolving alongside data communication networks, ushering in upgraded electrical infrastructures and enabling bidirectional communication between utility grids and consumers. The selection of communication technologies is crucial, and wireless communications have emerged as one of the main enabling technologies due to their cost-effectiveness, scalability, and ease of deployment. Ensuring the streamlined transmission of diverse applications between residential users and utility control centers is essential for effective data delivery in smart grids. This paper proposes a congestion control mechanism tailored to smart grid applications that use unreliable transport protocols such as UDP, which, unlike TCP, lacks inherent congestion control, posing a significant challenge to network performance. In this article, we exploit a reinforcement learning (RL) algorithm, a deep Q-network (DQN), to manage congestion control in a UDP environment. In this setting, the DQN model learns on its own from interactions with the environment, without the need to generate a dataset, making it well suited to the intricate and dynamic scenarios found in smart grid communications. Our evaluation covers two scenarios: (i) a grid-like configuration, and (ii) urban scenarios considering the deployment of smart meters in the cities of Montreal, Berlin, and Beijing. These evaluations provide a comprehensive examination of the proposed DQN-based congestion control approach under different conditions, showing its effectiveness and adaptability. A comprehensive performance assessment in both scenarios shows improvements in metrics such as packet delivery ratio, network throughput, fairness between different traffic sources, packet network transit time, and QoS provision.

INDEX TERMS Reinforcement learning, deep Q-networks, wireless communications, multi-hop networks, congestion control.
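The DQN-based rate control described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual model: the toy bottleneck-link environment, the two-feature state (queue occupancy, send rate), the three actions (decrease / hold / increase rate), the reward shaping, and the tiny one-hidden-layer Q-network are all simplified stand-ins for the richer smart-grid simulation the paper evaluates. The sketch does, however, show the key property claimed in the abstract: the agent learns purely from interactions with the environment, with no pre-built dataset.

```python
import numpy as np

class BottleneckEnv:
    """Toy bottleneck link (illustrative assumption, not the paper's setup).

    State: (normalized queue fill, normalized send rate).
    Actions: 0 = decrease rate, 1 = hold, 2 = increase rate.
    """
    CAPACITY = 10.0      # packets the link drains per step
    QUEUE_LIMIT = 50.0   # buffer size; overflow counts as drops

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.queue, self.rate = 0.0, 5.0
        return self._state()

    def _state(self):
        return np.array([self.queue / self.QUEUE_LIMIT,
                         self.rate / (2 * self.CAPACITY)])

    def step(self, action):
        self.rate = float(np.clip(self.rate + (action - 1), 1.0, 2 * self.CAPACITY))
        arrivals = self.rate + self.rng.normal(0.0, 0.5)   # noisy cross-traffic
        self.queue = max(0.0, self.queue + arrivals - self.CAPACITY)
        dropped = max(0.0, self.queue - self.QUEUE_LIMIT)
        self.queue = min(self.queue, self.QUEUE_LIMIT)
        # Reward throughput; penalize queueing delay and packet drops.
        reward = min(self.rate, self.CAPACITY) - 0.1 * self.queue - dropped
        return self._state(), reward

class TinyDQN:
    """One-hidden-layer Q-network trained by manual backprop on the TD error."""
    def __init__(self, n_state=2, n_hidden=16, n_actions=3, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_state, n_hidden)); self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_actions)); self.b2 = np.zeros(n_actions)
        self.lr = lr

    def q(self, s):
        h = np.tanh(s @ self.W1 + self.b1)
        return h @ self.W2 + self.b2, h

    def update(self, s, a, target):
        qvals, h = self.q(s)
        err = qvals[a] - target                 # TD error on the taken action
        gq = np.zeros_like(qvals); gq[a] = err
        gh = gq @ self.W2.T * (1 - h ** 2)      # backprop through tanh
        self.W2 -= self.lr * np.outer(h, gq); self.b2 -= self.lr * gq
        self.W1 -= self.lr * np.outer(s, gh); self.b1 -= self.lr * gh

def train(episodes=30, steps=50, gamma=0.95, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    env, net, buffer = BottleneckEnv(seed), TinyDQN(seed=seed), []
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Epsilon-greedy exploration of the rate-adjustment actions.
            a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(net.q(s)[0]))
            s2, r = env.step(a)
            buffer.append((s, a, r, s2))
            # Experience replay: learn from a random stored transition,
            # so no dataset has to be prepared in advance.
            ps, pa, pr, ps2 = buffer[rng.integers(len(buffer))]
            net.update(ps, pa, pr + gamma * np.max(net.q(ps2)[0]))
            s = s2
    return net

net = train()
qvals, _ = net.q(np.array([0.9, 0.9]))  # nearly full queue at a high send rate
```

A full implementation along the paper's lines would replace the toy environment with the network simulator's feedback (delivery ratio, transit time), use a deeper network with a separate target network, and tie actions to the UDP sender's pacing rate; this sketch only conveys the interaction-driven training loop.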
I. INTRODUCTION

In recent times, the modernization of electrical infrastructures has gone hand in hand with advances in data communication networks, ushering in a new era of modern electrical systems. This combination not only modernizes traditional electrical infrastructures but also establishes a two-way communication channel between utility grids and consumers. At the core of this transformation lies the interaction between power sources, transmission lines, control devices, sensors, monitoring systems, and communication technologies. Wireless communication technologies have become a key part of the transformation to smart grids. They are known to be cost-