TreadMarks supports parallel computing on networks of workstations by providing the application with a shared memory abstraction. Shared memory facilitates the transition from sequential to parallel programs. After identifying possible sources of parallelism in the code, most of the data structures can be retained without change, and only synchronization needs to be added to achieve a correct shared memory parallel program. Additional transformations may be necessary to optimize performance, but this can be done in an incremental fashion. We discuss the techniques used in TreadMarks to provide efficient shared memory, and our experience with two large applications, mixed integer programming and genetic linkage analysis.
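The incremental path the abstract describes — retain the sequential data structures, add only synchronization — can be sketched with ordinary Python threads standing in for TreadMarks' shared memory abstraction (the histogram workload, thread count, and all names are illustrative assumptions, not taken from the paper):

```python
import threading

# Shared data structure, retained unchanged from the sequential program.
histogram = [0] * 10
lock = threading.Lock()  # the only addition: synchronization

def count(values):
    for v in values:
        with lock:                # serialize updates to the shared structure
            histogram[v % 10] += 1

data = list(range(1000))
chunk = len(data) // 4
threads = [threading.Thread(target=count, args=(data[i * chunk:(i + 1) * chunk],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Coarse locking like this is the correctness-first starting point; the performance transformations the abstract mentions can then be applied incrementally (e.g., per-bucket locks or thread-private partial histograms merged at the end).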
Dynamic content Web sites consist of a front-end Web server, an application server and a back-end database. In this paper we introduce distributed versioning, a new method for scaling the back-end database through replication. Distributed versioning provides both the consistency guarantees of eager replication and the scaling properties of lazy replication. It does so by combining a novel concurrency control method based on explicit versions with conflict-aware query scheduling that reduces the number of lock conflicts. We evaluate distributed versioning using three dynamic content applications: the TPC-W e-commerce benchmark with its three workload mixes, an auction site benchmark, and a bulletin board benchmark. We demonstrate that distributed versioning scales better than previous methods that provide consistency. Furthermore, we demonstrate that the benefits of relaxing consistency are limited, except for the conflict-heavy TPC-W ordering mix.
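The two ingredients the abstract combines — explicit versions assigned to write transactions and conflict-aware scheduling of reads — can be sketched as follows. All class and method names are illustrative assumptions; the actual protocol additionally handles replica propagation, ordering across replicas, and more:

```python
from collections import defaultdict

class VersionScheduler:
    """Toy sketch of explicit per-table versioning with conflict-aware
    scheduling (names are illustrative, not the paper's API)."""

    def __init__(self):
        self.assigned = defaultdict(int)  # highest version handed out per table
        self.applied = defaultdict(int)   # highest version applied per table

    def begin_write(self, tables):
        # Each write transaction receives an explicit version for every table
        # it will modify, fixing a serialization order up front.
        versions = {}
        for t in tables:
            self.assigned[t] += 1
            versions[t] = self.assigned[t]
        return versions

    def complete(self, versions):
        for t, v in versions.items():
            self.applied[t] = max(self.applied[t], v)

    def can_schedule_read(self, tables):
        # Conflict-aware scheduling: dispatch a read-only query only once
        # every table it touches has all assigned writes applied, so the
        # query never stalls on a lock conflict at the database.
        return all(self.applied[t] == self.assigned[t] for t in tables)
```

For example, a read on `items` is held back while a write holding version 1 of `items` is in flight, and becomes schedulable as soon as that write completes.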
We investigate a transactional memory runtime system providing scaling and strong consistency for generic C++ and SQL applications on commodity clusters. We introduce a novel page-level distributed concurrency control algorithm, called Distributed Multiversioning (DMV). DMV automatically detects and resolves conflicts caused by data races for distributed transactions accessing shared in-memory data structures. DMV's key novelty is in exploiting the distributed data versions that naturally occur in a replicated cluster in order to avoid read-write conflicts. Specifically, DMV runs conflicting transactions in parallel on different replicas, instead of using different physical data copies within a single node as in classic multiversioning. In its most general update-anywhere configuration, DMV can be used to implement a software transactional memory abstraction for classic distributed shared memory applications. DMV also supports scaling for highly multithreaded database applications by centralizing updates on a master replica and creating the required page versions for read-only transactions on a set of slaves. In this DMV configuration, a version-aware scheduling technique distributes the read-only transactions across the slaves in such a way as to minimize version conflicts. In our evaluation, we use DMV as a lightweight approach to scaling a hash table microbenchmark workload and the industry-standard e-commerce workload of the TPC-W benchmark on a commodity cluster. Our measurements show scaling for both benchmarks. In particular, we show near-linear scaling up to 8 transactional nodes for the most common e-commerce workload, the TPC-W shopping mix. We further show that our scaling for the TPC-W e-commerce benchmark compares favorably with that of an existing coarse-grained asynchronous replication technique.
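The version-aware scheduling idea in the master/slave DMV configuration can be sketched as a replica-selection function: route a read-only transaction to a slave that already holds the page versions it needs, so no version has to be created or waited for. The replica identifiers and the tie-breaking heuristic below are assumptions for illustration:

```python
def pick_replica(replica_versions, needed):
    """Version-aware scheduling sketch. `replica_versions` maps a slave id
    to the highest page version it has applied; `needed` is the version a
    read-only transaction requires (illustrative model, not the paper's)."""
    ready = [r for r, v in replica_versions.items() if v >= needed]
    if ready:
        # Among slaves that can serve the read immediately, prefer the
        # least up-to-date one, keeping fresher slaves free for
        # transactions that need newer versions.
        return min(ready, key=lambda r: replica_versions[r])
    # Otherwise pick the slave with the least catching-up to do.
    return max(replica_versions, key=lambda r: replica_versions[r])
```

A transaction needing version 6 with slaves at versions 5 and 9 goes to the slave at 9; one needing version 4 goes to the slave at 5, leaving the fresher slave available.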
Most massively multiplayer game servers employ static partitioning of their game world into distinct mini-worlds that are hosted on separate servers. This limits cross-server interactions between players, and exposes the division of the world to players. We have designed and implemented an architecture in which the partitioning of game regions across servers is transparent to players and interactions are not limited to objects in a single region or server. This allows a finer-grained partitioning, which, combined with a dynamic load management algorithm, enables us to better handle transient crowding by adaptively dispersing or aggregating regions from servers in response to quality-of-service violations. Our load balancing algorithm is aware of the spatial locality in the virtual game world. Based on localized information, the algorithm balances the load and reduces cross-server communication, while avoiding frequent reassignment of regions. Our results show that locality-aware load balancing reduces the average user response time by up to a factor of 6 compared to a global algorithm that does not consider spatial locality, and by up to a factor of 8 compared to static partitioning.
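The locality-aware shedding step can be sketched as: when a server exceeds its quality-of-service threshold, hand one of its boundary regions to the lightest-loaded server that already owns an adjacent region, so cross-server communication along region borders stays low. Region names, unit region loads, and the selection rule are illustrative assumptions:

```python
def shed_region(assignment, load, overloaded, neighbors):
    """Locality-aware shedding sketch (all names illustrative): move one
    boundary region off `overloaded` to an adjacent server."""
    best = None
    for region, owner in assignment.items():
        if owner != overloaded:
            continue
        # Candidate targets: owners of spatially adjacent regions only,
        # preserving spatial locality of the partition.
        for n in neighbors[region]:
            target = assignment[n]
            if target == overloaded:
                continue
            if best is None or load[target] < best[2]:
                best = (region, target, load[target])
    if best:
        region, target, _ = best
        assignment[region] = target
        load[overloaded] -= 1   # unit region load, for the sketch
        load[target] += 1
    return assignment
```

On a four-region strip `r1..r4` split between servers A (overloaded) and B, this moves only the boundary region `r2` to B, rather than scattering A's regions across arbitrary servers as a purely global balancer might.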
We introduce a novel infrastructure supporting automatic updates for dynamic content browsing on resource-constrained mobile devices. Currently, the client is forced to continuously poll for updates from potentially different data sources, such as e-commerce, on-line auction, stock, and weather sites, to stay up to date with potential changes in content. We employ a pair of proxies, located on the mobile client and on a fully connected edge server, respectively, to minimize the battery consumption caused by wireless data transfers to and from the mobile device. The client specifies her interest in changes to specific parts of pages by highlighting portions of already loaded web pages in her browser. The edge proxy polls the web servers involved, and if relevant changes have occurred, it aggregates the updates into one batch to be sent to the client. The proxy running on the mobile device can pull these updates from the edge proxy, either on demand or periodically, or can listen for pushed updates initiated by the edge proxy. We also use SMS messages to indicate available updates and to inform the user of which pages have changed. Our approach is fully implemented using two alternative wireless networking technologies, 802.11 and GPRS. Furthermore, we leverage our SMS feature to implement and evaluate a hybrid approach which chooses either 802.11 or GPRS depending on the size of the update batch. Our evaluation explores the data transfer savings enabled by our proxy-based infrastructure and the energy consumption when using each of the two networking technologies and the hybrid approach. Our results show that our proxy system reduces data transfers to and from the mobile device by an order of magnitude and battery consumption by up to a factor of 4.5, compared to the client-initiated continuous polling approach. Our results also show that the batching effect of our proxy reduces energy consumption even in the case where the user never visits the same page twice.
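The edge proxy's poll-and-batch behavior can be sketched as follows: the proxy polls the registered sources on the client's behalf, records only fragments that actually changed, and hands the accumulated changes to the mobile device as one batch. The class, the fetcher callables, and the fragment keys are illustrative assumptions, not the paper's API:

```python
class EdgeProxy:
    """Sketch of the edge proxy's update batching (illustrative names)."""

    def __init__(self, sources):
        self.sources = sources  # fragment name -> zero-arg fetch function
        self.last = {}          # last content seen per fragment
        self.batch = {}         # pending updates for the mobile client

    def poll(self):
        # Poll the web servers involved; only changed content is batched,
        # so unchanged pages cost the mobile device nothing.
        for name, fetch in self.sources.items():
            content = fetch()
            if content != self.last.get(name):
                self.last[name] = content
                self.batch[name] = content

    def pull(self):
        # The client proxy drains all pending updates in a single transfer.
        updates, self.batch = self.batch, {}
        return updates
```

Because `pull()` returns everything accumulated since the last transfer, several polls' worth of changes cross the wireless link as one batch instead of one transfer per poll.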