2018 10th International Conference on Wireless Communications and Signal Processing (WCSP) 2018
DOI: 10.1109/wcsp.2018.8555643
Towards Fresh and Low-Latency Content Delivery in Vehicular Networks: An Edge Caching Aspect

Cited by 63 publications (32 citation statements)
References 15 publications
“…Following the work of [12], there has been increasing attention towards optimizing the AoI in wireless networks, e.g., through queue system optimization [21], multi-hop network implementation [22], and scheduling policies [23]. In addition, the study in [24] also suggests that reducing the AoI can come at the expense of incurring higher service latency. As such, the Cache-Assisted Lazy Update and Delivery (CALUD) scheme is proposed to manage this tradeoff through selecting the appropriate content update frequency.…”
Section: Related Work
confidence: 99%
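The tradeoff quoted above can be made concrete with a small numerical sketch. This is not the CALUD scheme itself, only an illustrative model: the mean Age of Information of content refreshed every `T` seconds follows a sawtooth with average `d + T/2`, while we *assume* service latency grows as `d + k/T` because more frequent updates consume delivery bandwidth. The cost weights `w`, delay `d0`, and congestion factor `k` are hypothetical values chosen for illustration.

```python
# Illustrative sketch (not the CALUD algorithm): choose a cache
# update period T balancing freshness (AoI) against service latency.

def cost(T, d0=0.1, k=0.5, w=0.5):
    aoi = d0 + T / 2.0       # mean AoI of periodic updates (sawtooth average)
    latency = d0 + k / T     # assumed latency penalty from frequent updates
    return w * aoi + (1 - w) * latency

# Sweep candidate update periods and pick the one minimizing the cost.
periods = [0.1 + 0.01 * i for i in range(1000)]
best = min(periods, key=cost)
print(round(best, 2))  # → 1.0 (analytic optimum of T/4 + 0.25/T is T = 1)
```

Updating too often inflates latency; updating too rarely lets cached content age, so an intermediate update frequency minimizes the combined cost — the shape of the tradeoff the excerpt describes.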
“…During the FL task, there can be more than one instance of model training request initiated by the model owner, e.g., to ensure that the global model is kept up-to-date through model training with updated data. We assume that each instance of request arrival follows the Poisson process [24]. An FL-based model training is first initiated through the request of the model owner.…”
Section: System Model and Problem Formulation
confidence: 99%
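The Poisson arrival assumption quoted above is equivalent to i.i.d. exponential inter-arrival times. A minimal sketch, with an illustrative rate `lam` (the actual rate is not given in the excerpt):

```python
import random

def poisson_arrivals(lam, horizon, seed=0):
    """Arrival times of a rate-`lam` Poisson process on [0, horizon)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam)  # exponential gap between successive requests
        if t >= horizon:
            return times
        times.append(t)

# Simulated training-request arrivals; the count concentrates
# around lam * horizon = 200 requests on average.
arrivals = poisson_arrivals(lam=2.0, horizon=100.0)
print(len(arrivals))
```

Generating gaps with `expovariate` rather than placing points uniformly is the standard way to simulate such a process, and it makes the memorylessness of request arrivals explicit.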
“…The work in Reference [51], instead, defines a caching mechanism for RSUs only, based on service latency and content freshness. The authors clarify that service latency reflects how quickly users receive content after issuing a request, while freshness indicates whether the delivered result is up to date.…”
Section: Caching Policies
confidence: 99%
“…Recently, several works have studied the optimization of AoI in vehicular networks [3]- [5]. In [3] and [4], the network dynamics are assumed to be known and in [5], they are estimated in a centralized manner without any consideration of future AoI. Yet, in a URLLC setting [6], [7], reliably learning and estimating the network dynamics with minimum communication overhead is desirable.…”
Section: Introduction
confidence: 99%