Reducing network latency in mobile applications is an effective way of improving the mobile user experience and has tangible economic benefits. This paper presents PALOMA, a novel client-centric technique for reducing network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to incur significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both when applied on a reusable evaluation benchmark we have developed and on real applications.

INTRODUCTION

In mobile computing, user-perceived latency is a critical concern, as it directly impacts user experience and often has severe economic consequences. A recent report shows that a majority of mobile users would abandon a transaction, or even delete an app, if the response time of a transaction exceeds three seconds [6]. Google estimates that an additional 500ms delay per transaction would result in up to 20% loss of traffic, while Amazon estimates that every 100ms delay would cause a 1% annual sales loss [42]. A previous study showed that network transfer is often the performance bottleneck, and that mobile apps spend 34-85% of their time fetching data from the Internet [32]. A compounding factor is that mobile devices rely on wireless networks, which can exhibit high latency, intermittent connectivity, and low bandwidth [21]. Reducing network latency thus becomes a highly effective way of improving the mobile user experience. In the context of mobile communication, we define latency as the response time of an HTTP request. In this paper, we propose a novel client-centric technique for minimizing network latency by prefetching HTTP requests in mobile apps.
Prefetching bypasses the performance bottleneck (in this case, network speed) and masks latency by allowing a response to a request to be served immediately, from a local cache.
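To make the idea concrete, the following is a minimal, hypothetical sketch of client-side prefetching (it is not PALOMA's actual implementation, which instruments Android bytecode): a request is issued ahead of time on a background thread, and the eventual use of the response is served from a local cache, masking the network latency. The names `PrefetchCache` and `slow_fetch` are illustrative assumptions, not part of the paper.

```python
import threading
import time

class PrefetchCache:
    """Illustration of request prefetching: issue the fetch early on a
    background thread, then serve the response from a local cache when
    the app actually needs it."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn   # e.g. a function wrapping an HTTP GET
        self._cache = {}         # url -> response body
        self._pending = {}       # url -> in-flight fetch thread

    def prefetch(self, url):
        # "When" to prefetch: as soon as the request URL is known,
        # before the app actually needs the response.
        if url in self._cache or url in self._pending:
            return
        def worker():
            self._cache[url] = self._fetch(url)
            self._pending.pop(url, None)
        t = threading.Thread(target=worker)
        self._pending[url] = t
        t.start()

    def get(self, url):
        # Serve from the cache if the prefetch finished; otherwise wait
        # for the in-flight fetch (still faster than starting it late).
        t = self._pending.get(url)
        if t is not None:
            t.join()
        if url not in self._cache:
            self._cache[url] = self._fetch(url)  # fallback: fetch on demand
        return self._cache[url]

def slow_fetch(url):
    time.sleep(0.2)                  # stand-in for network latency
    return f"response for {url}"

cache = PrefetchCache(slow_fetch)
cache.prefetch("https://example.com/data")    # issued ahead of time
time.sleep(0.25)                              # app does other work meanwhile
start = time.time()
body = cache.get("https://example.com/data")  # served from the local cache
elapsed = time.time() - start                 # near zero: latency is masked
```

The prefetch overlaps the network round trip with other app work, so by the time the response is requested it is already local; even when the fetch has not yet completed, waiting on the in-flight request is no slower than issuing it on demand.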
Network latency in mobile software has a large impact on user experience, with potentially severe economic consequences. Prefetching and caching have been shown to be effective in reducing latency in browser-based systems. However, those techniques cannot be directly applied to the emerging domain of mobile apps because of differences in their network interactions. Moreover, there is a lack of research on prefetching and caching techniques that may be suitable for the mobile app domain, and it is not clear whether such techniques can be effective or whether they are even feasible. This paper takes the first step toward answering these questions by conducting a comprehensive study to understand the characteristics of HTTP requests in over 1,000 popular Android apps. Our work focuses on the prefetchability of requests using static program analysis techniques and the cacheability of the resulting responses. We find that there is a substantial opportunity to leverage prefetching and caching in mobile apps, but that suitable techniques must take into account the nature of apps' network interactions and idiosyncrasies such as untrustworthy HTTP header information. Our observations provide guidelines for developers to utilize prefetching and caching schemes in app development, and motivate future research in this area.