Screen resolution and network conditions are among the main objective factors impacting the user experience, in particular for video streaming applications. User terminals, for their part, feature increasingly advanced characteristics, resulting in different network requirements for a good visual experience. Previous studies have tried to link the MOS (Mean Opinion Score) to video bitrate for different screen types (e.g., Common Intermediate Format (CIF), Quarter Common Intermediate Format (QCIF), and High Definition (HD)). We leverage such studies and formulate a QoE-driven resource allocation problem to pinpoint the optimal bandwidth allocation that maximizes the Quality of Experience (QoE) over all users of a network service provider located behind the same bottleneck link, while accounting for the characteristics of the screens they use for video playout. For our optimization problem, QoE functions are built using curve fitting on datasets capturing the relationship between MOS, screen characteristics, and bandwidth requirements. We propose a simple heuristic based on Lagrangian relaxation and KKT (Karush-Kuhn-Tucker) conditions to efficiently solve the optimization problem. Our numerical simulations show that the proposed heuristic increases overall QoE by up to 20% compared to an allocation with a TCP-like strategy implementing max-min fairness.
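The abstract does not spell out the fitted QoE curves or the exact heuristic. The following is a minimal sketch, assuming per-user logarithmic QoE functions QoE_i(b) = a_i * log(b) with screen-dependent coefficients a_i (hypothetical values) and a single bottleneck of capacity C, for which the KKT conditions give a closed-form allocation.

    import numpy as np

    def allocate_bandwidth(a, capacity):
        # Maximize sum_i a_i * log(b_i) subject to sum_i b_i <= capacity, b_i >= 0.
        # Stationarity of the Lagrangian gives a_i / b_i = lam for every user,
        # and the tight capacity constraint fixes lam = sum(a) / capacity,
        # hence b_i = a_i / lam = capacity * a_i / sum(a).
        a = np.asarray(a, dtype=float)
        return capacity * a / a.sum()

    # Hypothetical coefficients for QCIF, CIF, and HD screens behind a 10 Mbps link.
    a = [0.5, 1.0, 2.5]
    b = allocate_bandwidth(a, capacity=10.0)
    print(b)  # more demanding screens receive proportionally more bandwidth

Under this logarithmic assumption the KKT solution reduces to proportional sharing; with non-logarithmic fitted curves, the same stationarity system would be solved numerically rather than in closed form.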
Video streaming is, without a doubt, the dominant application on the Internet. Each time a video streaming platform (e.g., YouTube, Dailymotion, or Netflix) is requested, the browser loads a web page, sets up the video player, then retrieves and renders the requested content. Video transmission is based on Dynamic Adaptive Streaming over HTTP (DASH), which takes into consideration the underlying network conditions (e.g., delay, loss rate, and throughput) and the terminal characteristics (viewport) to select the video resolution to request from the server. In this work, we question the efficiency of this transmission in taking terminal characteristics into account, the viewport in particular, knowing that requesting a resolution exceeding the viewport results in wasted bandwidth. Such waste can cost money when the user is on a pay-as-you-go data plan, or steal bandwidth from other users who need it to further improve their Quality of Experience (QoE). To that end, we present a controlled experimental framework that leverages the YouTube and Dailymotion video players and the Chrome web request API to assess the impact of the browser viewport on the observed video resolution pattern [1]-[3]. In a first attempt of its kind, we use the observed patterns to quantify the amount of wasted bandwidth. Our data-driven analysis points to a higher sensitivity of the Dailymotion player to small viewports (240x144 and 400x225) compared to the YouTube player, resulting in 15% and 8% less bandwidth waste, respectively. However, as users shift toward larger viewports, the YouTube player becomes more viewport-friendly than the Dailymotion player, which shows an estimated bandwidth waste of 28%.
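The abstract does not define the waste metric precisely. The sketch below shows one plausible way to estimate it from per-chunk logs, counting as waste the extra bytes fetched whenever the delivered resolution exceeds the viewport; the record format, field names, and numbers are hypothetical, not the paper's exact methodology.

    # Hypothetical per-chunk records captured via the Chrome web request API:
    # (viewport_height, delivered_height, bytes_delivered, bytes_at_viewport_fit)
    chunks = [
        (144, 360, 1_200_000, 400_000),
        (144, 144,   380_000, 380_000),
        (225, 480, 1_900_000, 700_000),
    ]

    def wasted_fraction(chunks):
        # A chunk contributes waste when its resolution exceeds the viewport;
        # the waste is the surplus over what a viewport-matching chunk would cost.
        wasted = sum(got - fit for vp, res, got, fit in chunks if res > vp)
        total = sum(got for _vp, _res, got, _fit in chunks)
        return wasted / total

    print(f"Estimated bandwidth waste: {wasted_fraction(chunks):.1%}")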
Video streaming is without doubt the most requested Internet service and the main source of pressure on the Internet infrastructure. At the same time, users are no longer satisfied with the Internet's best-effort service; instead, they expect a seamless, high-quality service from the network. As a result, Internet Service Providers (ISPs) engineer their traffic so as to improve their end users' experience and avoid economic losses. Content providers, for their part, have shifted toward end-to-end encryption (e.g., TLS/SSL) to protect customer privacy. Video streaming relies on the Dynamic Adaptive Streaming over HTTP (DASH) protocol, which takes into consideration the underlying network conditions (e.g., delay, loss rate, and throughput) and the viewport capacity (e.g., screen resolution) to improve the experience of the end user within the limits of the available network resources. In this work, we propose an experimental framework able to infer fine-grained video flow information, such as chunk sizes, from encrypted YouTube video traces. We also present a novel technique to separate video and audio chunks in encrypted traces based on Gaussian Mixture Models (GMM). We then leverage our dataset to train models able to predict the viewport class (either SD or HD) per video session with an average accuracy of 92% and an F1-score of 85%. Predicting the exact viewport resolution is also possible, but with lower accuracy than predicting the viewport class.

Index Terms: Video streaming, controlled experiments, video chunk size, viewport resolution, YouTube encrypted traces, machine learning.
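The abstract names the technique (a GMM to separate audio and video chunks) without giving the features or model parameters. Below is a minimal sketch, assuming chunk sizes have already been reconstructed from the encrypted traces and that audio and video chunks differ mainly in size; the synthetic sizes and the two-component choice are assumptions, not the paper's exact setup.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Synthetic chunk sizes (bytes) standing in for values reconstructed from
    # encrypted traces: audio chunks are small and tightly clustered, video
    # chunks are larger and more spread out.
    audio = rng.normal(60_000, 5_000, 500)
    video = rng.normal(900_000, 300_000, 500)
    sizes = np.concatenate([audio, video]).reshape(-1, 1)

    # Fit a two-component GMM on chunk size and label each chunk with its most
    # likely component; the component with the larger mean is taken as video.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sizes)
    labels = gmm.predict(sizes)
    video_component = int(np.argmax(gmm.means_.ravel()))
    video_chunks = sizes[labels == video_component]

    print(f"{len(video_chunks)} of {len(sizes)} chunks labeled as video")

Per-session statistics over the chunks labeled as video (e.g., counts and size distributions) can then feed a standard supervised classifier for the SD/HD viewport class described in the abstract.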