Cloud gaming enables playing high-end games, originally designed for PC or game console setups, on low-end devices such as netbooks and smartphones, by offloading graphics rendering to GPU-powered cloud servers. However, transmitting the rendered graphics requires a large amount of network bandwidth, even as a compressed video stream. Foveated video encoding (FVE) reduces the bandwidth requirement by exploiting the non-uniform acuity of the human visual system and knowledge of where the user is looking. We have designed and implemented a system for cloud gaming with foveated graphics using a consumer-grade real-time eye tracker and an open-source cloud gaming platform. In this article, we describe the system and its evaluation through measurements with representative games from different genres, in order to understand the effect of the FVE scheme's parameterization on bandwidth requirements and its feasibility from the latency perspective. We also present results from a user study. The results suggest that it is possible to find a "sweet spot" for the encoding parameters so that users hardly notice the presence of foveated encoding while the scheme still yields most of the achievable bandwidth savings.
Good user experience with interactive cloud-based multimedia applications, such as cloud gaming and cloud-based VR, requires both low end-to-end latency and large amounts of downstream network bandwidth. In this paper, we present a foveated video streaming system for cloud gaming. The system adapts video stream quality by adjusting the encoding parameters on the fly to match the player's gaze position. We conduct measurements with a prototype that we developed for a cloud gaming system in conjunction with eye tracker hardware. Evaluation results suggest that such foveated streaming can reduce bandwidth requirements by more than 50%, depending on the parameterization of the foveated video coding, and that it is feasible from the latency perspective.
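The gaze-driven quality adaptation described above can be sketched as a per-block quantization offset that grows with angular distance from the gaze point. The function below is a minimal illustration of this idea; the parameter names, the linear falloff, and all numeric values (foveal radius, maximum offset) are assumptions for illustration, not the parameterization used in the paper.

```python
import math

def qp_offset(block_center, gaze, ppd, fovea_deg=2.5, max_offset=10):
    """Illustrative per-block quantizer offset for foveated encoding.

    Blocks within fovea_deg degrees of the gaze point keep the base
    quality (offset 0); beyond that, the QP offset (i.e., coarser
    quantization) ramps up linearly, capped at max_offset.

    block_center, gaze: (x, y) pixel coordinates
    ppd: display pixels per degree of visual angle
    All values here are hypothetical, not from the paper.
    """
    dx = block_center[0] - gaze[0]
    dy = block_center[1] - gaze[1]
    eccentricity = math.hypot(dx, dy) / ppd  # degrees from gaze point
    if eccentricity <= fovea_deg:
        return 0
    return min(max_offset, round(eccentricity - fovea_deg))
```

In a real encoder this kind of offset map would be fed to a region-of-interest interface and recomputed whenever a fresh gaze sample arrives from the eye tracker.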
Multi-access edge computing (MEC) enables placing video content at the edge of a mobile network with the aim of reducing data traffic in the backhaul network. Direct device-to-device (D2D) communication can further alleviate load on the backhaul network. Both MEC and D2D have already been examined by prior work, but their combination applied to adaptive video streaming has not yet been explored in detail. In this paper, we analyze how enabling D2D jointly with edge computing affects the quality of experience (QoE) of video streaming clients and contributes to reducing the backhaul traffic. To this end, we formulate the problem of jointly maximizing the QoE of the clients and minimizing the backhaul traffic and edge processing as an integer non-linear programming (INLP) optimization model and propose a low-complexity algorithm using a self-parameterization technique to solve the problem. The main takeaway from simulation results is that enabling D2D with edge computing reduces the backhaul traffic by approximately 18% and edge processing by 30% on average, while maintaining roughly the same average video bitrate per client compared to edge computing without D2D. Our results provide a guideline for system designers to judge the effectiveness of enabling D2D in MEC in the next generation of 5G mobile networks.
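A joint objective of the kind described above could, in generic form, be written as a weighted trade-off between client QoE and network-side cost. The formulation below is only a sketch of that shape; the symbols (Q_u for per-client QoE, B for backhaul traffic, P for edge processing, weights λ and μ, binary decision vector x) are illustrative and not the paper's actual INLP model.

```latex
\max_{x \in \{0,1\}^{n}} \;\; \sum_{u \in U} Q_u(x) \;-\; \lambda\, B(x) \;-\; \mu\, P(x)
\qquad \text{s.t. link-capacity and content-assignment constraints on } x
```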
Remote rendering systems comprise powerful servers that render graphics on behalf of low-end client devices and stream the graphics as compressed video, enabling high-end gaming and virtual reality on those devices. One key challenge with them is the amount of bandwidth required for streaming high-quality video. Humans have spatially non-uniform visual acuity: we have sharp central vision, but our ability to discern details decreases rapidly with angular distance from the point of gaze. This phenomenon, called foveation, can be exploited to reduce the need for bandwidth. In this paper, we study three different methods to produce a foveated video stream of real-time rendered graphics in a remote rendering system: 1) foveated shading as part of the rendering pipeline, 2) foveation as a post-processing step after rendering and before video encoding, and 3) foveated video encoding. We report results from a number of experiments with these methods. They suggest that foveated rendering alone does not help save bandwidth. Instead, the two other methods decrease the resulting video bitrate significantly, but they also have different quality-per-bit and latency profiles, which makes them desirable solutions in slightly different situations.
CCS Concepts: • Computer systems organization → Real-time system architecture; • Computing methodologies → Image compression; Non-photorealistic rendering; Virtual reality; • Networks → Cloud computing.