2019
DOI: 10.1103/physrevaccelbeams.22.014601

Reinforcement learning based schemes to manage client activities in large distributed control systems

Abstract: Large distributed control systems typically can be modeled by a hierarchical structure with two physical layers: a console level computers (CLCs) layer and a front end computers (FECs) layer. The control system of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) consists of more than 500 FECs, each acting as a server providing services to a large number of clients. Hence the interactions between a server and its clients become crucial to overall system performance. There are…

Cited by 4 publications (4 citation statements)
References 7 publications
“…There are many other examples for the use of RL at accelerator facilities, for example, Refs. [240][241][242][243].…”
Section: Model State Environment
confidence: 99%
“…Additional studies using MOGA for accelerator optimization were conducted by Li et al (2018) and Neveu et al (2019). Reinforcement learning tools have also been developed to optimize various elements of the accelerator system (Gao et al, 2019;Bruchon et al, 2020;Hirlaender and Bruchon, 2020;Kain et al, 2020;O'Shea, Bruchon, and Gaio, 2020;John et al, 2021). Additionally, there is new research on transferring the RL policy models to a field-programmable gate array to provide a low latency control response time (John et al, 2021).…”
Section: Control Optimization
confidence: 99%
“…RL algorithms have been substantially improved in many aspects in the past decades, including balancing exploration and exploitation (Sutton and Barto 2018), search strategies (Lin 2015), learning behaviour (Sutton and Barto 2018), and reward evaluation (Gao et al 2019). However, there is a lack of application to water resources systems or hydropower systems, with a few studies using traditional RL such as opposition-based learning, Q-learning, or fitted Q-iteration (Lee and Labadie 2007; Castelletti et al 2010).…”
Section: Introduction
confidence: 99%
“…However, there is a lack of application to water resources systems or hydropower systems, with a few studies using traditional RL such as opposition-based learning, Q-learning, or fitted Q-iteration (Lee and Labadie 2007; Castelletti et al 2010). Traditional RL uses state decision tables to map the relationship between states and actions (Lin 2015; Gao et al 2019). With an increasing number of state variables, however, the decision table approach of traditional RL cannot effectively handle the large number of combinations of states and actions, resulting in the curse of dimensionality problem (Mnih et al 2013; François-Lavet et al 2018).…”
Section: Introduction
confidence: 99%
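The "state decision table" the last citing passage refers to can be made concrete with a minimal tabular Q-learning sketch. The tiny chain environment, the parameter values, and all names below are illustrative assumptions, not drawn from the cited works; the point is only that the table maps every (state, action) pair to a value, which is exactly what stops scaling as the state space grows.

```python
import random
from collections import defaultdict

random.seed(0)

# Chain environment (assumed for illustration): states 0..4, reward 1 for
# reaching the terminal state 4, 0 otherwise.
N_STATES = 5
ACTIONS = [1, -1]            # move right or left along the chain
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# The "state decision table": (state, action) -> estimated value.
Q = defaultdict(float)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def choose(state):
    # Epsilon-greedy action selection over the table row for this state.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy read off the table: one entry per non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

With five states and two actions the table has only ten entries; with many continuous state variables the number of entries explodes combinatorially, which is the curse-of-dimensionality problem the passage cites and the motivation for replacing the table with a function approximator in deep RL.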