2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring)
DOI: 10.1109/vtc2020-spring48590.2020.9128853
Multiple Channel Access using Deep Reinforcement Learning for Congested Vehicular Networks

Cited by 14 publications (8 citation statements) | References 17 publications
“…In [67], Choe et al. proposed a self-adaptive MAC-layer algorithm employing DQN with a novel contention-information-based state representation to improve the performance of V2V safety-packet broadcast in infrastructure-less congested VANETs. They evaluated the algorithm against two criteria: Packet Delivery Ratio (PDR) and end-to-end delay.…”
Section: Collision Management (mentioning)
confidence: 99%
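The statement above only summarizes the approach. As a rough, hedged illustration of what a DQN over a contention-information state might look like, the sketch below defines a small Q-network and epsilon-greedy action selection in Python. The feature layout (busy ratio, backoff stage, neighbour count, recent collisions), network size, and action set are assumptions for illustration, not the state representation or architecture used by Choe et al.

```python
# Illustrative sketch only: a small Q-network over a hypothetical
# contention-information state. None of these features or sizes are
# taken from the cited paper.
import random
import torch
import torch.nn as nn

class ChannelAccessDQN(nn.Module):
    def __init__(self, state_dim=4, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),   # one Q-value per channel-access action
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy selection over the estimated Q-values."""
    if random.random() < epsilon:
        return random.randrange(q_net.net[-1].out_features)
    with torch.no_grad():
        return int(torch.argmax(q_net(state)))

# One decision from a made-up contention observation:
# [channel busy ratio, backoff stage, neighbour count, recent collisions]
q_net = ChannelAccessDQN()
state = torch.tensor([0.6, 2.0, 12.0, 1.0])
action = select_action(q_net, state)
```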
“…In [71], Q-learning is used to control the contention window through a hybrid back-off that combines EIED and Linear Increase Linear Decrease (LILD [73]) back-off, with the vehicles in the network acting as agents. However, compared with [67], these works assume single-channel operation on the control channel and do not consider the multi-channel operation of the DSRC standard.…”
Section: Collision Management (mentioning)
confidence: 99%
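As a hedged sketch of the kind of scheme described in [71], the snippet below pairs the EIED and LILD back-off rules with a tabular Q-learning agent that picks which rule to apply after each broadcast attempt. The reward, state encoding, CW bounds, and the toy congestion model are assumptions made for this example, not the design of the cited work.

```python
# Minimal illustrative sketch (not the exact scheme from [71]): tabular
# Q-learning choosing between EIED and LILD contention-window updates.
import random
from collections import defaultdict

CW_MIN, CW_MAX = 15, 1023
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def eied(cw, success):
    """Exponential Increase Exponential Decrease back-off rule."""
    return max(CW_MIN, cw // 2) if success else min(CW_MAX, cw * 2)

def lild(cw, success):
    """Linear Increase Linear Decrease back-off rule (step = CW_MIN)."""
    return max(CW_MIN, cw - CW_MIN) if success else min(CW_MAX, cw + CW_MIN)

RULES = (eied, lild)
q = defaultdict(lambda: [0.0, 0.0])   # (cw, last_outcome) -> Q-value per rule

def choose_rule(state):
    """Epsilon-greedy choice between the two adjustment rules."""
    if random.random() < EPSILON:
        return random.randrange(len(RULES))
    return max(range(len(RULES)), key=lambda a: q[state][a])

# Toy interaction loop: success probability grows with CW (congested channel).
cw, outcome = CW_MIN, False
for _ in range(1000):
    state = (cw, outcome)
    action = choose_rule(state)
    cw = RULES[action](cw, outcome)               # adjust CW with the chosen rule
    outcome = random.random() < cw / (cw + 200)   # crude congestion model
    reward = 1.0 if outcome else -1.0
    next_state = (cw, outcome)
    q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
```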
“…A mechanism for dynamically adjusting backoff times in the CSMA/CA MAC protocol of IEEE 802.11 networks using reinforcement learning is presented in [Amuru et al 2015]. [Choe et al 2020] proposes an adaptive MAC-protocol algorithm in which a Deep Q-Network (DQN) is used to improve broadcast performance in vehicular networks. [Edalat and Obraczka 2019] uses machine learning to adjust the contention window (CW) of the CSMA/CA MAC protocol in IEEE 802.11 networks, achieving higher throughput and lower latency.…”
Section: Figure 1 Dynamics between Agent and Environment (unclassified)
“…In addition, RL and DRL algorithms can now be used to study CW optimization thanks to the high computing capabilities of modern network devices. Some recent works [16], [21]-[25] discuss CW optimization through the effectiveness of Q-learning and deep Q-learning network (DQN) algorithms, addressing the problem of optimizing the CW value in mobile ad-hoc networks (MANETs), VANETs, and in both LTE-LAA and Wi-Fi networks.…”
Section: Introduction (mentioning)
confidence: 99%
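For context on the DQN side of these CW-optimization works, the sketch below shows a generic experience-replay training step over a discrete set of CW levels. The state and action sizes, hyperparameters, and replay layout are assumptions for illustration; it does not reproduce any of the cited works.

```python
# Hedged sketch of a generic DQN training step for discrete CW selection;
# all sizes and hyperparameters are illustrative assumptions.
import random
from collections import deque
import torch
import torch.nn as nn

def make_net(state_dim=3, n_cw_levels=6):
    # Q-values over a discrete set of CW levels (e.g. 15, 31, ..., 1023).
    return nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                         nn.Linear(32, n_cw_levels))

policy, target = make_net(), make_net()
target.load_state_dict(policy.state_dict())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
replay = deque(maxlen=10000)   # filled during interaction with
                               # (state_vec, action_idx, reward, next_state_vec)
GAMMA = 0.9

def train_step(batch_size=32):
    """One DQN update from replayed transitions (skipped while the buffer is small)."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.stack([torch.as_tensor(x[i], dtype=torch.float32)
                                for x in batch]) for i in range(4))
    a = a.long().unsqueeze(1)
    q_sa = policy(s).gather(1, a).squeeze(1)          # Q(s, a) from the policy net
    with torch.no_grad():
        q_next = target(s2).max(dim=1).values         # bootstrap from the target net
    loss = nn.functional.mse_loss(q_sa, r + GAMMA * q_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
```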