2012 15th International IEEE Conference on Intelligent Transportation Systems 2012
DOI: 10.1109/itsc.2012.6338837
Application of reinforcement learning with continuous state space to ramp metering in real-world conditions

Abstract: In this paper we introduce a new approach to Freeway Ramp Metering (RM) based on Reinforcement Learning (RL), with a focus on real-life experiments in a case study in the City of Toronto. Typical RL methods use discrete state representations, which lead to slow convergence in complex problems. A continuous representation of the state space has the potential to significantly improve learning speed and therefore enables tackling large-scale complex problems. A robust approach based on local regression, na…
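The abstract describes estimating action values over a continuous traffic state using local regression. A minimal sketch of one such scheme, distance-weighted averaging over stored samples (a k-nearest-neighbour form of local regression), is shown below; the state layout, class name, and weighting rule are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch: Q-value estimation over a continuous traffic state
# (e.g. [downstream density, upstream flow, ramp queue]) by local regression
# over previously visited states. All names and the inverse-distance
# weighting are assumptions for illustration.

class LocalQEstimator:
    def __init__(self, n_actions, k=5):
        self.k = k                # number of neighbours used in the regression
        self.n_actions = n_actions
        self.states = []          # visited continuous states
        self.q_values = []        # per-state array of action values

    def add(self, state, q_row):
        """Store a visited state and its action-value estimates."""
        self.states.append(np.asarray(state, dtype=float))
        self.q_values.append(np.asarray(q_row, dtype=float))

    def estimate(self, state, action):
        """Distance-weighted average of the k nearest stored Q-values."""
        if not self.states:
            return 0.0
        X = np.asarray(self.states)
        d = np.linalg.norm(X - np.asarray(state, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]
        w = 1.0 / (d[idx] + 1e-6)  # closer samples weigh more
        q = np.asarray(self.q_values)[idx, action]
        return float(np.dot(w, q) / w.sum())
```

Because estimates generalize across nearby states, the agent does not need to revisit every discrete cell of a lookup table, which is the learning-speed advantage the abstract refers to.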

Cited by 31 publications (22 citation statements) | References 16 publications
“…Jacob and Abdulhai (2010) presented a methodology that used RL to control freeway-corridor traffic by integrating ramp metering and route guidance via variable message signs (VMS). Davarynejad et al. (2011) and Rezaee et al. (2012) applied RL to ramp metering. El-Tantawy et al. (2013) developed a control strategy that integrated a network of adaptive traffic signal controllers using multi-agent RL, while Fares and Gomaa (2014) used an RL approach to control traffic at an isolated on-ramp.…”
Section: RL and Its Applications
Mentioning confidence: 99%
“…If the algorithm selects higher ε values such as 0.5 and 0.9, it will fail to reach the benchmark line. In [7][8][9][10][11][12][13], α is often less than 0.5, γ is between 0.7 and 0.9, and ε is usually around 0.1.…”
Section: Tts T Ts N
Mentioning confidence: 99%
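The parameter ranges quoted above (α below 0.5, γ between 0.7 and 0.9, ε near 0.1) refer to the standard Q-learning update and ε-greedy action selection. A minimal tabular sketch using values inside those ranges is shown here; the state encoding and reward are toy assumptions, not drawn from the cited studies:

```python
import random

# Tabular Q-learning with the parameter ranges cited above:
# learning rate alpha < 0.5, discount gamma in [0.7, 0.9], exploration
# epsilon around 0.1. States, actions, and rewards are illustrative only.

ALPHA, GAMMA, EPSILON = 0.3, 0.8, 0.1

def epsilon_greedy(q_row, rng=random):
    """Pick a random action with probability EPSILON, else the greedy one."""
    if rng.random() < EPSILON:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=q_row.__getitem__)

def q_update(q, s, a, r, s_next):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    q[s][a] += ALPHA * (r + GAMMA * max(q[s_next]) - q[s][a])
```

A small α keeps updates stable under noisy traffic measurements, a γ near 0.8 makes the metering agent value delays avoided several steps ahead, and ε near 0.1 leaves most decisions greedy while still sampling alternatives, consistent with the ranges reported in the quoted statement.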
“…After this contribution, some recent studies have also shown the effectiveness of RL for ramp control under different settings and conditions. For instance, coordinated ramp control using RL is considered in [9], continuous state space was analyzed in [10], and indirect RL was tested in [11,12]. Although some efforts have been made to explore the application of RL in the ramp control domain, the issues of how to set the parameters for RL-based ramp control strategies and how these settings influence the algorithm performance have not been widely studied.…”
Section: Introduction
Mentioning confidence: 99%
“…Reinforcement learning (RL) has been utilised in many transportation fields. For example, it has been used in urban traffic light control [13–15] and ramp metering on highways [16–18]. In the air traffic management field, a series of achievements have been made with RL.…”
Section: Introduction
Mentioning confidence: 99%