2020
DOI: 10.1109/tcsvt.2019.2946755
A View Synthesis-Based 360° VR Caching System Over MEC-Enabled C-RAN

Cited by 66 publications (41 citation statements)
References 36 publications
“…Decoupling the problem into caching and routing optimizations, the Lagrange partial relaxation method is applied to solve it. Dai et al. [206] propose a synthesis-based VR caching scheme in C-RAN, where synthesis combines multiple views (e.g., texture and depth) to generate a multiview 360° video.…”
Section: B. Collaborative Video Edge Delivery (mentioning)
confidence: 99%
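The decoupling idea quoted above (relax the coupling constraint with a Lagrange multiplier so the remaining decisions separate) can be sketched on a toy cache-placement problem. The Python snippet below is only an illustrative sketch of Lagrangian relaxation with a subgradient multiplier update; the gains, sizes, capacity, and step schedule are assumptions made for the example and are not the formulation of [206] or of this paper.

```python
# Toy illustration of Lagrangian relaxation for cache placement (not the
# scheme of [206]): choose x[v] in {0,1} to maximize sum(g[v]*x[v]) subject
# to sum(s[v]*x[v]) <= C.  Relaxing the capacity constraint with a
# multiplier lam decouples the choice across views; lam is then updated by
# a diminishing subgradient step.

def solve_relaxed(gains, sizes, lam):
    """For a fixed multiplier, each view is cached independently
    whenever its Lagrangian profit g - lam * s is positive."""
    return [1 if g - lam * s > 0 else 0 for g, s in zip(gains, sizes)]

def lagrangian_cache_placement(gains, sizes, capacity, iters=200, step0=1.0):
    lam = 0.0
    best_x, best_gain = [0] * len(gains), 0.0
    for k in range(1, iters + 1):
        x = solve_relaxed(gains, sizes, lam)
        used = sum(s * xi for s, xi in zip(sizes, x))
        # Keep the best feasible placement seen so far (heuristic recovery:
        # the relaxation itself only guarantees a dual bound).
        if used <= capacity:
            gain = sum(g * xi for g, xi in zip(gains, x))
            if gain > best_gain:
                best_gain, best_x = gain, x
        # Subgradient step on the capacity violation, projected to lam >= 0.
        lam = max(0.0, lam + (step0 / k) * (used - capacity))
    return best_x, best_gain

if __name__ == "__main__":
    gains = [10.0, 5.0, 4.0]   # assumed latency saving per cached view
    sizes = [4.0, 3.0, 2.0]    # assumed file sizes
    print(lagrangian_cache_placement(gains, sizes, capacity=6.0))
```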
“…In [14], a Device-to-Device (D2D)-assisted VR video distribution system and a pre-caching algorithm based on QoE gain were proposed. A view synthesis-based 360° VR caching system was designed in [15], where a hierarchical collaborative caching problem was formulated to minimize the transmission latency. To further improve the QoE of VR video services, [16]–[25] envisioned the joint computing-caching capabilities of the edge network as the key enablers to obtain more potential gains.…”
Section: Related Work (mentioning)
confidence: 99%
“…Considering the randomness of the user’s head rotation, the probability that the VR device requests viewpoint m ∈ P is denoted as q_m, which characterizes how often viewpoint m is tracked by the VR users when navigating the panoramic scene. According to [15], [28], we assume the viewpoint popularity follows the uniform distribution, i.e., q_m = 1/M. Besides, the sizes of the 2D FOV and 3D FOV files of each viewpoint are denoted as S and S̃ (bit), respectively.…”
Section: A User-centric Network Architecture (mentioning)
confidence: 99%
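As a small worked example of the uniform-popularity model quoted above (q_m = 1/M, with per-viewpoint file sizes S and S̃), the sketch below computes expected traffic per request for an assumed edge-cache state. The numeric values, the cached set, and the serve-from-edge-on-hit policy are illustrative assumptions, not the system model of [15] or [28].

```python
# Minimal numeric sketch of the quoted popularity model: M viewpoints,
# each requested with probability q_m = 1/M.

M = 8                      # number of viewpoints in the panoramic scene
q = [1.0 / M] * M          # uniform viewpoint popularity, q_m = 1/M
S = 2.0e6                  # assumed 2D FOV file size per viewpoint (bits)
S_tilde = 6.0e6            # assumed 3D FOV file size per viewpoint (bits)
cached_3d = {0, 1, 2}      # viewpoints whose 3D FOV is cached at the edge

# Expected bits fetched from the origin per request: a cached 3D FOV is
# served locally (0 backhaul bits); a miss pulls the 3D FOV from the origin.
expected_backhaul = sum(
    q[m] * (0.0 if m in cached_3d else S_tilde) for m in range(M)
)

# Under uniform popularity the expected 2D FOV size per request is simply S.
expected_2d = sum(q[m] * S for m in range(M))

print(f"Expected backhaul traffic per request: {expected_backhaul:.0f} bits")
print(f"Expected 2D FOV size per request: {expected_2d:.0f} bits")
```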
“…However, these approaches are approaching their performance limits, and they are sometimes too expensive to implement in practice. The rapid growth of video traffic and emerging new video applications, such as Augmented Reality (AR) and Virtual Reality (VR) [2], [3], bring great challenges to existing wireless networks.…”
Section: Introduction (mentioning)
confidence: 99%
“…This joint design approach is highly promising because caching alone may not be able to meet the fast-growing demands of emerging video applications in 5G wireless and beyond. For example, in AR [2], [3], [15], the video object classification and recognition task has to be performed first, and then the videos are delivered to the user. In multi-viewpoint 360° interactive video transmission, the viewing-related features have to be analyzed at the edge first; then the video quality and other transmission-related parameters are determined [16].…”
Section: Introduction (mentioning)
confidence: 99%
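The "analyze viewing-related features at the edge first, then set the transmission parameters" pattern described above can be illustrated with a minimal sketch. The tile layout, the two-level quality set, and the mean-based viewport prediction below are assumptions chosen for brevity; they are not the scheme of [16].

```python
# Generic sketch of an edge-side "analyze, then decide" step for tiled
# 360-degree video: predict the viewport from recent head-yaw samples and
# assign a higher bitrate to tiles likely to be viewed, under a rate budget.

from statistics import fmean

def pick_tile_rates(yaw_samples_deg, n_tiles=8, fov_deg=120.0,
                    hi_rate=4.0, lo_rate=1.0, budget=20.0):
    """Return per-tile bitrates (Mbps): tiles overlapping the predicted
    field of view get hi_rate while the budget allows; the rest get lo_rate."""
    predicted_yaw = fmean(yaw_samples_deg) % 360.0   # crude viewport prediction
    tile_width = 360.0 / n_tiles
    rates = [lo_rate] * n_tiles
    spent = lo_rate * n_tiles
    for t in range(n_tiles):
        center = (t + 0.5) * tile_width
        # smallest angular distance between tile centre and predicted gaze
        diff = abs((center - predicted_yaw + 180.0) % 360.0 - 180.0)
        if diff <= fov_deg / 2 and spent + (hi_rate - lo_rate) <= budget:
            rates[t] = hi_rate
            spent += hi_rate - lo_rate
    return rates

if __name__ == "__main__":
    print(pick_tile_rates([85.0, 92.0, 101.0]))
```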