Web performance is a hot topic, as many studies have shown a strong correlation between slow webpages and loss of revenue due to user dissatisfaction. Front and center in Page Load Time (PLT) optimization is the order in which resources are downloaded and processed. The new HTTP/2 specification includes dedicated resource prioritization provisions, to be used in tandem with resource multiplexing over a single, well-filled TCP connection. However, little is yet known about its application by browsers and its impact on page load performance. This article details an extensive survey of modern User Agent implementations, with the conclusion that the major vendors all approach HTTP/2 prioritization in widely different ways, from naive (Safari, IE, Edge) to complex (Chrome, Firefox). We investigate the performance effect of these discrepancies with a full-factorial experimental evaluation involving eight prioritization algorithms, two off-the-shelf User Agents, 40 realistic webpages, and five heterogeneous (emulated) network conditions. We find that in general the complex approaches yield the best results, while naive schemes can lead to over 25% slower median visual load times. Also, prioritization is found to matter most for heavy-weight pages. Finally, it is ascertained that achieving PLT optimizations via generic server-side HTTP/2 re-prioritization schemes is a non-trivial task and that their performance is influenced by the implementation intricacies of individual browsers.
Abstract: Web page performance is becoming increasingly important for end users but also more difficult for web developers to provide, in part because of the limitations of the legacy HTTP/1 protocol. The new HTTP/2 protocol was designed with performance in mind, but existing work comparing its improvements to HTTP/1 often shows contradictory results. It is unclear for developers how to profit from HTTP/2 and whether current HTTP/1 best practices such as resource concatenation, resource embedding, and hostname sharding should still be used. In this work we introduce the Speeder framework, which uses established tools and software to easily and reproducibly test various setup permutations. We compare and discuss results over many parameters (e.g., network conditions, browsers, metrics), both from synthetic and realistic test cases. We find that in most non-extreme cases HTTP/2 is on a par with HTTP/1 and that most HTTP/1 best practices are applicable to HTTP/2. We show that situations in which HTTP/2 currently underperforms are mainly caused by inefficiencies in implementations, not by shortcomings in the protocol itself.
The QUIC and HTTP/3 protocols are powerful but complex and difficult to debug and analyse. Our previous work proposed the qlog format for structured endpoint logging to aid in taming this complexity. This follow-up study evaluates the real-world implementations, uses, and deployments of qlog and our associated qvis tooling in academia and industry. Our survey among 28 QUIC experts shows high community involvement, while Facebook confirms qlog can handle Internet scale. Lessons learned from researching 16 QUIC+HTTP/3 and five TCP+TLS+HTTP/2 implementations demonstrate that qlog and qvis are essential tools for performing root-cause analysis when debugging modern Web protocols.
Abstract: Cloud gaming, in which the processing power of a datacenter-based infrastructure is used in place of local resources, is a popular research topic. Vendors successfully apply this technology to let low-end hardware deliver a gameplay experience similar to that of state-of-the-art consoles. Many works in the literature have focused on the quantitative aspects of the technology (e.g., delay measurements, visual quality determination), but the qualitative factors have not received a similar systematic treatment. Games are typically classified by their gameplay into distinct categories or genres, including action, puzzle, strategy, and racing games. In this work, a qualitative comparison of these genres is presented based on a common testing methodology that combines both objective (physiological measurements) and subjective (user evaluation) approaches. While in traditional networked games only multiplayer experiences are subject to the detrimental effect of delay, the nature of cloud gaming means that singleplayer experiences can be affected as well. Results from this analysis suggest that delay sensitivity is similar across the different genres in both singleplayer cloud gaming setups and traditional networked multiplayer games. In particular, the results show that action-oriented games are more sensitive to network delay than other genres in both setups.