We have developed a formal performance model for centralized and replicated architectures involving two users, giving equations for response, feedthrough, and task completion times. The model explains previous empirical results by showing that (a) low network latency favors the centralized architecture and (b) asymmetric processing powers favor the centralized architecture. In addition, it makes several new predictions, showing that under certain practical conditions, (a) centralizing the application on the slower machine may be the optimal solution, (b) centralizing the application on the faster machine is sometimes better than replicating, and (c) as the duration of the collaboration increases, the difference in performance between the centralized and replicated architectures gets magnified. We have verified these predictions through new experiments for which we created synthesized logs based on parameters gathered from actual collaboration logs. Our results increase the understanding of centralized and replicated architectures and can be used by (a) users of adaptive systems to decide when to perform architecture changes, (b) users who have a choice of systems with different architectures to choose the system most suited for a particular collaboration mode (defined by the values of the collaboration parameters), and (c) users locked into a specific architecture to decide how to change the hardware and other collaboration parameters to improve performance.

Related Work

Unlike in traditional computer science fields such as databases and operating systems, there has been relatively little work in the collaboration domain on studying the performance of system architectures, even though performance is arguably more important in this field because of the human in the event-processing loop. As mentioned earlier, existing studies have been confined to gathering empirical data. Moreover, very few studies have directly targeted collaboration.
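The intuition behind the centralized-versus-replicated trade-off can be sketched with first-order response-time estimates. This is a hypothetical illustration, not the paper's actual model: in a centralized architecture the remote user pays a network round trip to the host, while in a replicated one each user's replica processes input locally and feedthrough pays one hop. All parameter values below are illustrative assumptions.

```python
# Hypothetical first-order sketch of two-user response and feedthrough times
# under centralized vs. replicated architectures. All costs are assumptions
# for illustration; the paper's formal model is more detailed.

def centralized_times(proc_host, net_delay):
    """Host user processes locally; remote user pays a network round trip."""
    local_response = proc_host                    # host user's response time
    remote_response = 2 * net_delay + proc_host   # input up, output back
    return local_response, remote_response

def replicated_times(proc_1, proc_2, net_delay):
    """Each replica processes its user's input locally; feedthrough pays one hop."""
    responses = (proc_1, proc_2)                  # each user sees only local cost
    feedthrough = (net_delay + proc_2,            # user 1's input seen at user 2
                   net_delay + proc_1)            # user 2's input seen at user 1
    return responses, feedthrough

# Asymmetric processing powers (fast host, slow remote machine) can favor
# centralizing on the fast machine when the network delay is small:
fast, slow, delay = 10, 100, 5                    # milliseconds, illustrative
print(centralized_times(fast, delay))             # (10, 20)
print(replicated_times(fast, slow, delay))        # ((10, 100), (105, 15))
```

With these illustrative numbers, the slower user's response time drops from 100 ms (replicated, local processing) to 20 ms (centralized on the fast host), matching the prediction that asymmetric processing powers favor the centralized architecture.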
One can, however, draw some collaboration implications indirectly from studies of distributed window systems. Nieh, Yang, and Novik (2000) conducted experiments that measured the relative performances of two distributed window systems, the Linux implementation of VNC (Hopper, 1998) and Microsoft's Windows 2000 RDP implementation. The architecture used was essentially a two-user centralized architecture with the user at the hosting site inactive. Such a setup gives an idea of the performance experienced by a remote user interacting with a centralized program, assuming the host site does not become a bottleneck. These studies compared two different implementations of the centralized architecture and do not address the relative performances of different architecture configurations. Wong and Seltzer (2000) measured the network load for various remote user operations, and Danskin and Hanrahan (1994) measured the frequencies of these operations. Together, these two results give an idea of the actual bandwidth requirements for a variety of remote desktop tasks. Two other studi...
Two important performance metrics in collaborative systems are local and remote response times. Previous analytical and simulation work has shown that these response times depend on three important factors: the processing architecture, the communication architecture, and the scheduling of tasks dictated by these two architectures. We show that it is possible to create a system that improves response times by dynamically adjusting these three system parameters in response to changes in collaboration parameters, such as new users joining and network delays varying. We present practical approaches for collecting collaboration parameters, computing multicast overlays, applying the analytical models of previous work, preserving coupling semantics during optimizations, and keeping overheads low. Simulations and experiments show that the system improves performance in practical scenarios.
We evaluate response times, in N-user collaborations, of the popular centralized (client-server) and replicated (peer-to-peer) architectures, and of a hybrid architecture in which each replica serves a cluster of nearby clients. Our work consists of definitions of aspects of these architectures that have previously been unspecified but must be resolved for the analysis, a formal evaluation model, and a set of experiments. The experiments are used to define the parameters of, and validate, the formal analysis. In addition, they compare the performances, under the three architectures, of existing data-centric, logic-centric, and stateless shared components. We show that, under realistic conditions, a small number of users, high intra-cluster network delays, and large output processing and transmission costs favor the replicated architecture; large input sizes favor the centralized architecture; high inter-cluster network delays favor the hybrid architecture; and high input processing and transmission costs, low think times, asymmetric processing powers, and logic-intensive applications favor both the centralized and hybrid architectures. We use our validated formal model to make useful predictions about the performance of the three kinds of architectures under realistic scenarios we could not create in lab experiments.
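The hybrid architecture's advantage under high inter-cluster delays can be illustrated with a hypothetical feedthrough estimate (not the paper's actual model): within a cluster, an input travels only client-to-replica-to-client over low intra-cluster links, while cross-cluster feedthrough additionally pays the inter-cluster hop. All delay and processing values are illustrative assumptions.

```python
# Hypothetical sketch of feedthrough in the hybrid architecture, where each
# replica serves a cluster of nearby clients. Delays and processing costs are
# illustrative assumptions, not measured values from the paper.

def hybrid_feedthrough(intra_delay, inter_delay, proc, same_cluster):
    """Time until another user sees an input, assuming replica-side processing."""
    if same_cluster:
        # client -> local replica -> peer client in the same cluster
        return intra_delay + proc + intra_delay
    # client -> local replica -> remote replica -> peer client
    return intra_delay + proc + inter_delay + intra_delay

# With low intra-cluster (2 ms) and high inter-cluster (80 ms) delays, most
# feedthrough stays cheap inside a cluster, which is why high inter-cluster
# delays favor the hybrid over a single centralized host:
print(hybrid_feedthrough(2, 80, 10, same_cluster=True))   # 14
print(hybrid_feedthrough(2, 80, 10, same_cluster=False))  # 94
```

In a purely centralized architecture, by contrast, even two users in the same cluster would route their feedthrough through the (possibly distant) central host, paying the inter-cluster delay on every input.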