In HTTP Adaptive Streaming (HAS), video content is temporally divided into multiple segments, each encoded at several quality levels. The client can adapt the requested video quality to network changes, generally resulting in smoother playback. Unfortunately, live streaming solutions still often suffer from playout freezes and a large end-to-end delay. By reducing the segment duration, the client can use a smaller temporal buffer and respond even faster to network changes. However, since segments are requested sequentially, this approach is susceptible to high round-trip times. In this letter, we discuss the merits of an HTTP/2 push-based approach. We present the details of a measurement study on the available bandwidth in real 4G/LTE networks, and analyze the induced bit rate overhead for HEVC-encoded video segments with a sub-second duration. Through an extensive evaluation with the generated video content, we show that the proposed approach results in a higher video quality (+7.5%) and a lower freeze time (-50.4%), and makes it possible to reduce the live delay compared to traditional solutions over HTTP/1.1.
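To make the round-trip-time sensitivity of sequential requests concrete, the back-of-the-envelope comparison below contrasts a pull-based client with push-based delivery for sub-second segments; the segment duration, RTT, and transfer time are illustrative assumptions, not measurements from the letter.

```python
# Illustrative comparison of pull-based vs. push-based segment delivery.
# All numbers are assumptions for the sake of the example.

SEGMENT_DURATION = 0.5   # s, a sub-second segment
RTT = 0.3                # s, round-trip time on a mobile link
DOWNLOAD_TIME = 0.3      # s, time to transfer one segment's payload

# Pull (HTTP/1.1): every segment costs one request round trip before data flows.
pull_cycle = RTT + DOWNLOAD_TIME

# Push (HTTP/2): after the initial request, segments are pushed as they become
# available, so no per-segment round trip is needed.
push_cycle = DOWNLOAD_TIME

print(f"pull cycle per segment: {pull_cycle:.2f} s (> {SEGMENT_DURATION} s -> buffer drains)")
print(f"push cycle per segment: {push_cycle:.2f} s (< {SEGMENT_DURATION} s -> playout keeps up)")
```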
HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. For each segment, a client selects and retrieves the most suitable quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from playout freezes due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, the IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.
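A minimal server-side sketch of the push mechanism is given below, using the hyper-h2 (`h2`) library; the segment paths, authority, and the surrounding connection setup and event loop are assumptions made for illustration and do not reproduce the paper's implementation.

```python
# Sketch: pushing upcoming video segments over an established HTTP/2 connection
# with the h2 library. Connection setup, the socket event loop and manifest
# handling are omitted; paths and payloads are illustrative.
from h2.connection import H2Connection

def push_next_segments(conn: H2Connection, request_stream_id: int,
                       segment_paths: list[str], segment_data: dict[str, bytes]) -> None:
    """Announce and send the next segments as server-pushed streams."""
    for path in segment_paths:
        promised_id = conn.get_next_available_stream_id()  # server-initiated (even) stream ID
        # Announce the pushed resource with a PUSH_PROMISE frame on the request stream.
        conn.push_stream(
            stream_id=request_stream_id,
            promised_stream_id=promised_id,
            request_headers=[(":method", "GET"), (":path", path),
                             (":scheme", "https"), (":authority", "example.org")],
        )
        # Send the segment itself on the promised stream.
        conn.send_headers(promised_id, [(":status", "200")])
        conn.send_data(promised_id, segment_data[path], end_stream=True)
```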
Over the last years, streaming of multimedia content has become more prominent than ever. To meet increasing user requirements, the concept of HTTP Adaptive Streaming (HAS) has recently been introduced. In HAS, video content is temporally divided into multiple segments, each encoded at several quality levels. A rate adaptation heuristic selects the quality level for every segment, allowing the client to take into account the observed available bandwidth and the buffer filling level when deciding on the most appropriate quality level for every new video segment. Despite the ability of HAS to deal with changing network conditions, a low average quality and a large camera-to-display delay are often observed in live streaming scenarios. In the meantime, the HTTP/2 protocol was standardized in February 2015, providing new features that target a reduction of the page loading time in web browsing. In this paper, we propose a novel push-based approach for HAS, in which HTTP/2's push feature is used to actively push segments from server to client. Using this approach with video segments of sub-second duration, referred to as super-short segments, it is possible to reduce the startup time and end-to-end delay in HAS live streaming. Evaluation of the proposed approach, through emulation of a multi-client scenario with highly variable bandwidth and latency, shows that the startup time can be reduced by 31.2% compared to traditional solutions over HTTP/1.1 in mobile, high-latency networks. Furthermore, the end-to-end delay in live streaming scenarios can be reduced by 4 s, while providing the content at a similar video quality.
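The quality selection described above can be sketched as a simple heuristic that combines the measured throughput with the buffer level; the bitrate ladder, thresholds, and safety factor below are illustrative choices, not the heuristic evaluated in the paper.

```python
# Illustrative rate adaptation heuristic: pick the highest bitrate that the
# estimated bandwidth can sustain, and fall back to the lowest quality when
# the buffer is close to under-running. All values are example choices.

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # available quality levels
LOW_BUFFER_S = 2.0    # below this buffer level, play it safe
SAFETY_FACTOR = 0.8   # request slightly below the measured bandwidth

def select_quality(estimated_bandwidth_kbps: float, buffer_level_s: float) -> int:
    """Return the index of the quality level to request for the next segment."""
    if buffer_level_s < LOW_BUFFER_S:
        return 0  # lowest quality, refill the buffer as fast as possible
    sustainable = SAFETY_FACTOR * estimated_bandwidth_kbps
    candidates = [i for i, b in enumerate(BITRATES_KBPS) if b <= sustainable]
    return max(candidates) if candidates else 0

# Example: a 4 Mbps estimate with a comfortable buffer selects the 3000 kbps level.
print(select_quality(4000, buffer_level_s=8.0))
```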
HTTP Adaptive Streaming (HAS) represents the dominant technology for delivering video over the Internet, due to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework, where a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol for implementing the Software-Defined Networking principle. The main element of the controller is a Machine Learning (ML) engine based on the Random Undersampling Boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed video or of the client's characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions, under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45% respectively, when compared to benchmark algorithms. These results represent a major improvement in the QoE of users watching multimedia content online.
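A minimal sketch of how such a controller-side classifier could be trained and queried is shown below, using the RUSBoost implementation from the imbalanced-learn package; the feature set, training data, and the prioritization hook are assumptions for illustration and do not reproduce the paper's ML engine or its fuzzy-logic stage.

```python
# Sketch: train a RUSBoost classifier on per-client network measurements and
# flag clients predicted to be close to a freeze for prioritized delivery.
# Features, training data and the prioritization call are illustrative placeholders.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

# Each row: [measured throughput (kbps), inter-request gap (s), bytes in flight (kB)];
# label 1 = a freeze occurred shortly after these measurements were taken.
X_train = np.array([[3000, 0.9, 400], [500, 2.5, 90], [2500, 1.0, 350], [400, 3.0, 60]])
y_train = np.array([0, 1, 0, 1])

clf = RUSBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

def maybe_prioritize(client_id: str, measurements: np.ndarray) -> None:
    """Request prioritized delivery for a client when a freeze is predicted."""
    if clf.predict(measurements.reshape(1, -1))[0] == 1:
        print(f"prioritize next segment for {client_id}")  # placeholder for installing an OpenFlow rule

maybe_prioritize("client-42", np.array([450, 2.8, 70]))
```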
HTTP Adaptive Streaming (HAS) is becoming the de facto standard for video streaming services over the Internet. In HAS, each video is segmented and stored in different qualities. Rate adaptation heuristics, deployed at the client, allow the most appropriate quality level to be dynamically requested, based on the current network conditions. Current heuristics under-perform when sudden bandwidth drops occur, leading to freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). In this article, we propose an OpenFlow-based framework capable of increasing clients' QoE by reducing video freezes. An OpenFlow controller is in charge of introducing prioritized delivery of HAS segments, based on feedback collected from both the network nodes and the clients. To reduce the side effects introduced by prioritization on the bandwidth estimation of the clients, we introduce a novel mechanism to inform the clients about the prioritization status of the downloaded segments without introducing overhead into the network. This information is then used to correct the estimated bandwidth in case of prioritized delivery. By evaluating this novel approach through emulation, under varying network conditions and in several multi-client scenarios, we show how the proposed approach can reduce freezes by up to 75% compared to state-of-the-art heuristics.
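The bandwidth-correction idea can be illustrated with a small sketch: when the client learns that a segment was delivered with prioritization, it discounts that throughput sample instead of feeding the inflated value to its rate adaptation. The discount factor below is an illustrative assumption, not the correction used in the article.

```python
# Sketch: correct the client-side bandwidth estimate for segments that were
# delivered with network prioritization, so a temporarily boosted throughput
# sample does not trigger an over-optimistic quality switch.

PRIORITIZATION_DISCOUNT = 0.6  # example value: fair-share bandwidth assumed well below the boosted rate

def throughput_sample(segment_bytes: int, download_time_s: float, prioritized: bool) -> float:
    """Return a bandwidth sample in kbps, discounted when the segment was prioritized."""
    kbps = (segment_bytes * 8 / 1000) / download_time_s
    return kbps * PRIORITIZATION_DISCOUNT if prioritized else kbps

# A prioritized 1 MB segment downloaded in 1 s yields a discounted estimate of 4800 kbps.
print(f"{throughput_sample(1_000_000, 1.0, prioritized=True):.0f} kbps")
```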