2018 IEEE 87th Vehicular Technology Conference (VTC Spring)
DOI: 10.1109/vtcspring.2018.8417882

Impact of Quantized Side Information on Subchannel Scheduling for Cellular V2X

Abstract: In Release 14, 3GPP completed a first version of cellular vehicle-to-everything (C-V2X) communications wherein two modalities were introduced. One of these schemes, known as mode-3, requires support from eNodeBs in order to realize subchannel scheduling. This paper discusses a graph theoretical approach for semi-persistent scheduling (SPS) in mode-3 harnessing a sensing mechanism whereby vehicles can monitor signal-to-interference-plus-noise ratio (SINR) levels across sidelink subchannels. eNodeBs request such…
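The excerpt does not describe the paper's actual quantizer, only that vehicles report quantized SINR side information per sidelink subchannel. As a rough illustration under that assumption only, the sketch below uses a uniform quantizer over a fixed dB range; the function name, bit width, and range are all hypothetical.

```python
import numpy as np

def quantize_sinr_report(sinr_db, n_bits=2, lo=-10.0, hi=30.0):
    """Map per-subchannel SINR measurements (dB) to n_bits-wide indices.

    Hypothetical uniform quantizer over [lo, hi] dB: values outside the
    range are clipped, and each sidelink subchannel yields one integer
    index that a vehicle could report to the eNodeB as side information.
    """
    levels = 2 ** n_bits
    clipped = np.clip(sinr_db, lo, hi)
    step = (hi - lo) / levels
    # Floor-divide into bins, capping the top edge into the last level.
    idx = np.minimum(((clipped - lo) // step).astype(int), levels - 1)
    return idx

# Example: a vehicle senses five subchannels and reports 2-bit indices.
sinr = np.array([-12.3, 4.7, 18.0, 29.9, 35.2])
print(quantize_sinr_report(sinr))  # -> [0 1 2 3 3]
```

With 2 bits per subchannel, the report costs only a fraction of what full SINR feedback would, which is the trade-off the paper's title points at.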

Cited by 6 publications (1 citation statement)
References 3 publications
“…However, the optimal policies are normally time-dependent in a finite-horizon setting, while DDPG is designed to solve infinite-horizon or indefinite-horizon problems, where the learned policy is the same at every time step [22]. To deal with this problem, we set the target values of DDPG in the last control interval K − 1 to be derived by (28) in the same way as for the other control intervals, i.e., as the sum of the immediate reward and the discounted target Q value of the next state, instead of only the immediate reward…”
Section: MTCC-PC Algorithm
confidence: 99%
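Equation (28) itself is not reproduced in this excerpt. As a minimal sketch of the described choice, the generic DDPG target computation below bootstraps from the target networks in every control interval, including the last one K − 1, rather than reducing the terminal target to the immediate reward alone; target_actor, target_critic, and the tensor shapes are assumptions, not the cited paper's code.

```python
import torch

def ddpg_targets(rewards, next_states, target_actor, target_critic,
                 gamma=0.99):
    """Bootstrapped DDPG targets: y = r + gamma * Q'(s', mu'(s')).

    Per the cited description, the SAME bootstrapped target is used in
    the last control interval K - 1 as in every other interval, instead
    of the usual terminal rule y = r.
    """
    with torch.no_grad():
        # Target actor proposes next actions; target critic scores them.
        next_actions = target_actor(next_states)
        next_q = target_critic(next_states, next_actions).squeeze(-1)
        return rewards + gamma * next_q
```

Keeping the bootstrap term at K − 1 matches the time-invariant policy DDPG learns, at the cost of a biased terminal target relative to the true finite-horizon optimum.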