Web prefetching is a technique aimed at reducing the user's perceived latency by downloading, during navigation idle times, web objects before the user requests them. Although several research efforts on this subject can be found in the literature, there are few implementations for real environments. We propose a real framework that provides web prefetching on current web client and server software working under the standard HTTP 1.1 protocol. This framework also provides detailed statistics, which are very convenient for performance evaluation studies. In this paper we evaluate and compare the performance of different prediction algorithms under real conditions, showing the usefulness of our proposal for this kind of environment.
Despite the wide and intensive research effort focused on web prediction and prefetching techniques aimed at reducing the user's perceived latency, few attempts have been made to implement and use them in real environments, mainly due to their complexity and to the supposed limitations that low user bandwidths imposed a few years ago. Nevertheless, current user bandwidths open a new scenario in which prefetching again becomes an interesting option for improving web performance. This paper presents Delfos, a framework that performs web prediction and prefetching in a real environment and tries to close the existing gap between research and practice. Delfos is integrated in the web architecture without modifying the standard HTTP 1.1 protocol: it inserts predictions on the web server side, while prefetches are carried out by the client. In addition, it can also be used as a flexible framework to evaluate and compare existing prefetching techniques and algorithms and to assist in the design of new ones, since it provides detailed statistics reports.
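The server-side prediction / client-side prefetch split described in these two abstracts can be illustrated with a minimal sketch. The abstracts do not specify the exact mechanism beyond its compatibility with standard HTTP 1.1, so the use of a "Link: ...; rel=prefetch" response header and the prediction table below are assumptions made purely for illustration:

```python
# Minimal sketch of server-side prediction hints piggybacked on normal
# HTTP 1.1 responses; the actual prefetching decision is left to the client.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical prediction table: for each requested URI, the objects the
# prediction engine expects the user to ask for next.
PREDICTIONS = {
    "/index.html": ["/news.html", "/style.css"],
}

class PrefetchHintHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>...</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Attach one standard Link header per predicted object, so no
        # change to the HTTP 1.1 protocol itself is required.
        for hint in PREDICTIONS.get(self.path, []):
            self.send_header("Link", f"<{hint}>; rel=prefetch")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PrefetchHintHandler).serve_forever()
```

A client honoring such hints would fetch the listed objects during navigation idle time and serve them from its cache if the user actually requests them.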
The prompt availability of up-to-date economic indicators is crucial for monitoring the economy and for steering the design of policies that promote business innovation and raise firm competitiveness. Economic indicators usually suffer from significant lags, since they are commonly obtained from official databases or from interviews with a sample of agents, which limits the representativeness and usefulness of the information. In a context in which a presence on the World Wide Web is almost an obligation for companies to succeed, corporate websites are connected, in some way, to the firms' economic activity. On the basis of this relation, this paper proposes an intelligent system that analyzes corporate websites to produce web indicators related to the economic activity of firms. This system has been successfully implemented and applied to infer company size characteristics from data gathered from corporate websites. Our results show that relatively large companies tend to provide web content in a foreign language and to use proprietary web servers.
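A minimal sketch of extracting the two web features the abstract names, under stated assumptions: foreign-language content is approximated here by declared hreflang alternates (an assumption; the paper does not describe its detection method), and the web server software is read from the Server response header:

```python
# Sketch: gather two company-size-related web indicators from a corporate site.
import re
import urllib.request

def extract_web_indicators(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
        # Web server software as self-reported in the HTTP Server header.
        server = resp.headers.get("Server", "unknown")
    # Declared alternate-language versions of the page; more than one
    # distinct value suggests content offered in a foreign language.
    langs = set(re.findall(r'hreflang=["\']([a-zA-Z-]+)["\']', html))
    return {
        "server_software": server,
        "declared_languages": sorted(langs),
        "has_foreign_language": len(langs) > 1,
    }

if __name__ == "__main__":
    print(extract_web_indicators("https://example.com/"))
```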
Web user behavior has changed widely over the last years. Performing a precise and up-to-date characterization of web user behavior is important for carrying out representative web performance studies. In this sense, it is valuable to capture detailed information about the user's experience, which permits a fine-grained characterization. Two main types of tools are distinguishable: complex commercial software tools such as workload generators, and academic tools. The latter mainly concentrate on the development of Windows applications that gather web events (e.g., browser events) or on tools that modify part of the web browser code. In this paper, we present CARENA, a client-side, browser-embedded tool to capture and replay user navigation sessions. Like some commercial software packages, our tool captures information about the user session, which can later be used to replay or mimic the gathered user navigation. Nevertheless, unlike these software packages, our tool emulates the original user think times, since these times are important for obtaining precise and reliable performance results. Among the main features of CARENA are: multiplatform, open source, lightweight, standards based, easily installable and usable, and programmed in JavaScript and XUL.
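The think-time-preserving replay that the abstract highlights can be sketched as follows. The session log format (a think time in seconds before each request, plus the URL) is an assumption for illustration; CARENA itself runs inside the browser in JavaScript and XUL:

```python
# Sketch: replay a captured navigation session, reproducing user think times.
import time
import urllib.request

# Hypothetical captured session: (think_time_before_request_s, url).
SESSION = [
    (0.0, "https://example.com/"),
    (4.2, "https://example.com/products.html"),
    (9.7, "https://example.com/contact.html"),
]

def replay(session):
    for think_time, url in session:
        # Wait the user's original think time instead of issuing requests
        # back to back, so measured latencies reflect realistic load.
        time.sleep(think_time)
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        print(f"{url}: {time.monotonic() - start:.3f}s")

if __name__ == "__main__":
    replay(SESSION)
```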
Web prefetching techniques are an attractive solution for reducing user-perceived latency. These techniques are driven by a prediction engine or algorithm that guesses the next actions of web users. A large number of prediction algorithms have been proposed since the first prefetching approach was published, although it is only over the last two or three years that they have begun to be successfully implemented in commercial products. These algorithms can be implemented in any element of the web architecture and can use a wide variety of information as input, which affects their structure, data system, computational resources, and accuracy. Knowledge of the input information, and an understanding of how it can be handled to make predictions, can help improve the design of current prediction engines and, consequently, of prefetching techniques. This paper analyzes fifty of the most relevant algorithms proposed over 15 years of prefetching research and proposes a taxonomy in which the algorithms are classified according to the input data they use. For each group, the main advantages and shortcomings are highlighted.
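As an illustration of the kind of algorithm such a taxonomy classifies by its input data, here is a minimal sketch of one common family: a first-order Markov predictor trained on per-user object access sequences (e.g., reconstructed from server logs). This is a generic example, not the specific method of any surveyed paper:

```python
# Sketch: first-order Markov prediction engine over object access sequences.
from collections import Counter, defaultdict

class MarkovPredictor:
    def __init__(self):
        # transitions[a][b] = number of times object b followed object a.
        self.transitions = defaultdict(Counter)

    def train(self, session):
        # Count consecutive-access pairs within one user session.
        for prev, nxt in zip(session, session[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current, top_n=2):
        # Most frequent successors of `current` become the prefetch hints.
        return [obj for obj, _ in self.transitions[current].most_common(top_n)]

if __name__ == "__main__":
    p = MarkovPredictor()
    p.train(["/index.html", "/news.html", "/sports.html"])
    p.train(["/index.html", "/news.html", "/weather.html"])
    p.train(["/index.html", "/about.html"])
    print(p.predict("/index.html"))  # ['/news.html', '/about.html']
```

Algorithms in this family differ mainly in the order of the model and in how sessions are delimited; other groups in the taxonomy draw on different inputs, such as object content or hyperlink structure.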