2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY) 2020
DOI: 10.1109/sisy50555.2020.9217076
Vehicle Control in Highway Traffic by Using Reinforcement Learning and Microscopic Traffic Simulation

Cited by 9 publications (5 citation statements)
References 14 publications
“…Szoke et al [138] proposed a policy-based RL method to learn a safe driving policy that drives on a dense highway in the shortest amount of time. The state of the environment is represented by a vector with the agent's properties, lane ID, and surrounding vehicles' properties.…”
Section: Other Assistance Systems
confidence: 99%
“…The state of the environment is represented by a vector with the agent's properties, lane ID, and surrounding vehicles' properties. The SUMO simulator was used in [138] to conduct and evaluate the experiments. The experiments demonstrate that the learned agent can control a vehicle in a crowded, constantly changing highway environment without collisions or termination events most of the time.…”
Section: Other Assistance Systems
confidence: 99%
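The state representation described above can be sketched as a fixed-length feature vector; the following is a minimal Python illustration, assuming hypothetical fields (`speed`, `pos`, `rel_pos`) for the ego vehicle and its neighbors — the exact features used in [138] may differ:

```python
import numpy as np

def build_state(ego, lane_id, neighbors, n_slots=4):
    """Flatten ego properties, lane ID, and surrounding vehicles'
    properties into a fixed-length state vector (hypothetical layout)."""
    # Ego vehicle: speed and longitudinal position.
    features = [ego["speed"], ego["pos"]]
    # Lane ID as a scalar feature.
    features.append(float(lane_id))
    # Pad/truncate neighbors to a fixed number of slots so the
    # vector length stays constant for the neural-network input.
    for i in range(n_slots):
        if i < len(neighbors):
            v = neighbors[i]
            features += [v["speed"], v["rel_pos"]]
        else:
            features += [0.0, 0.0]  # empty slot
    return np.asarray(features, dtype=np.float32)

state = build_state(
    ego={"speed": 28.0, "pos": 150.0},
    lane_id=1,
    neighbors=[{"speed": 25.0, "rel_pos": -12.0}],
)
print(state.shape)  # (11,)
```

Fixing the number of neighbor slots is one common way to keep the observation dimension constant across a constantly changing traffic scene.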
“…This interface makes it possible to implement driver models with defined driving behavior. Similar functions are available in SUMO through a Python interface called the Traffic Control Interface (TraCI) [20]. The whole DLL driving model is illustrated in Figure 1, which consists of three models.…”
Section: External Driving Model
confidence: 99%
“…However, regardless of the applied algorithm, the environment, and the target task, none of the mentioned papers guarantees 100% performance. Our previous works also show examples of RL agents built with simple neural networks performing in highway scenarios [16], [17], but perfect behavior cannot be assured in every situation. Further examples of autonomous driving functions solved by RL, such as car-following, lane-keeping, trajectory following, merging, or driving in dense traffic, can be found in the following collection: [18]-[23].…”
Section: Related Work
confidence: 99%