We investigate the problem of atomic commit in transactional database systems built on top of Distributed Hash Tables. To this end, we present a framework for DHTs that provides strong data consistency and transactions on data stored in a decentralized way. To solve the atomic commit problem within distributed transactions, we propose an adaptation of Paxos Commit as a non-blocking algorithm. We exploit the symmetric replication technique available in the DKS DHT to determine which nodes are needed to execute the commit algorithm. By doing so, we achieve fewer communication rounds than traditional Three-Phase Commit protocols. We also show how the proposed solution copes with the dynamism caused by churn in DHTs. Our solution works correctly while relying only on inaccurate detection of node failures, which is necessary for systems running over the Internet.
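As a rough illustration of the two ingredients named above, the Python sketch below computes the symmetric-replication keys used to select the acceptor nodes and applies a simplified Paxos Commit decision rule. The identifier-space size, replication degree, and all function names are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch: selecting acceptors via symmetric replication and
# reaching a commit decision in the spirit of Paxos Commit. N, F and the
# function names are illustrative assumptions.

N = 2**16          # size of the identifier space (assumption)
F = 4              # replication degree (assumption)

def symmetric_replica_keys(key: int, n: int = N, f: int = F) -> list[int]:
    """Replica keys under symmetric replication: key + i*n/f (mod n)."""
    return [(key + i * n // f) % n for i in range(f)]

def paxos_commit_decision(votes_per_participant: dict[str, list[str]],
                          f: int = F) -> str:
    """Commit iff, for every participant, a majority of the f acceptors
    registered a 'prepared' vote; otherwise abort."""
    majority = f // 2 + 1
    for participant, accepted_votes in votes_per_participant.items():
        prepared = sum(1 for v in accepted_votes if v == "prepared")
        if prepared < majority:
            return "abort"
    return "commit"

# Example: acceptors for a transaction item stored under key 12345
print(symmetric_replica_keys(12345))
print(paxos_commit_decision({"p1": ["prepared"] * 3, "p2": ["prepared"] * 4}))
```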
Data consistency can be violated in Distributed Hash Tables (DHTs) due to inconsistent lookups. In this paper, we identify the events leading to inconsistent lookups and inconsistent responsibilities for a key, and we find the inaccuracy of failure detectors to be the main cause of these inconsistencies. Through simulations with inaccurate failure detectors, we study the probability of reaching a system configuration that may lead to inconsistent data. We analyze majority-based algorithms for operations on replicated data: to ensure that concurrent operations do not violate consistency, they have to use non-disjoint sets of replicas. We analytically derive the probability of concurrent operations using disjoint replica sets. By combining the simulation and analytical results, we show that the probability of a data-consistency violation is negligibly low for majority-based algorithms in DHTs.
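The following Monte Carlo sketch illustrates the kind of question this analysis answers: how likely is it that two concurrent majority quorums are disjoint when lookups may return wrong replicas? The inconsistency model (each looked-up replica is wrong independently with a small probability), the parameters, and the function names are assumptions chosen for illustration, not the paper's analytical model.

```python
# Hedged Monte Carlo sketch: estimate the probability that two concurrent
# majority quorums are disjoint when lookups may return an inconsistent
# replica set. The error model and parameters are assumptions.
import random

def lookup_replicas(true_replicas, spare_nodes, p_wrong):
    """Return a replica view where each entry is wrong with probability p_wrong."""
    return [random.choice(spare_nodes) if random.random() < p_wrong else r
            for r in true_replicas]

def disjoint_majority_probability(r=5, p_wrong=0.01, trials=100_000):
    true_replicas = list(range(r))
    spares = list(range(r, 2 * r))   # nodes wrongly believed responsible
    majority = r // 2 + 1
    disjoint = 0
    for _ in range(trials):
        q1 = set(random.sample(lookup_replicas(true_replicas, spares, p_wrong), majority))
        q2 = set(random.sample(lookup_replicas(true_replicas, spares, p_wrong), majority))
        if not (q1 & q2):
            disjoint += 1
    return disjoint / trials

# With consistent lookups (p_wrong = 0) two majorities always intersect;
# with rare wrong lookups the disjointness probability stays very small.
print(disjoint_majority_probability())
```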
Structured Overlay Networks (SONs) provide a promising platform for high-performance applications since they are scalable, fault-tolerant, and self-managing. SONs provide lookup services that map keys to nodes, which can be used as processing or storage resources. In SONs, lookups for a key may return inconsistent results. Consequently, it is difficult to provide consistent data services on top of SONs that build on key-based search. In this paper, we study how frequently inconsistent lookups occur. We show that the effect of lookup inconsistencies can be reduced by using node responsibilities. We present our results as a trade-off between consistency and availability of keys.
Structured overlay networks provide a promising platform for high-performance applications since they are scalable, fault-tolerant, and self-managing. Structured overlays provide lookup services that map keys to nodes, which can be used as processing or storage resources. Lookups for a key may return inconsistent results. Consequently, it is nontrivial to provide consistent data services on top of structured overlays that are built on key-based search. In this paper, we study how frequently inconsistent lookups occur. We show that the effect of lookup inconsistencies can be reduced by assigning responsibility for key intervals to nodes. We present our results as a trade-off between consistency and availability of keys. Further, since many distributed applications employ quorum techniques at their core, we analyze the probability that majority-based quorum techniques function correctly in a structured overlay with inconsistent lookups. Our analysis shows that this probability is high despite lookup inconsistencies.
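A minimal sketch of the responsibility idea mentioned in the two abstracts above: a node answers a request only if the key falls inside the interval it currently believes it is responsible for, and asks the client to retry otherwise, rather than serving a key it may not own. The ring size, field names, and retry convention are assumptions made for illustration.

```python
# Hedged sketch of a responsibility check on a ring-structured overlay.
# A node is assumed responsible for keys in (predecessor_id, node_id].
from dataclasses import dataclass

RING = 2**16   # identifier-space size (assumption)

@dataclass
class Node:
    node_id: int
    predecessor_id: int

    def responsible_for(self, key: int) -> bool:
        """True iff key lies in (predecessor_id, node_id] on the ring."""
        if self.predecessor_id < self.node_id:
            return self.predecessor_id < key <= self.node_id
        # the interval wraps around the end of the identifier space
        return key > self.predecessor_id or key <= self.node_id

    def handle_get(self, key: int, store: dict):
        if not self.responsible_for(key % RING):
            return ("retry", None)   # reject instead of answering for a key it may not own
        return ("ok", store.get(key))

node = Node(node_id=4000, predecessor_id=1000)
print(node.handle_get(2500, {2500: "value"}))   # ('ok', 'value')
print(node.handle_get(9000, {}))                # ('retry', None)
```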
Abstract. We investigate the problem of ensuring and maximizing performance guarantees for applications suffering from software aging. Our focus is the optimization of the minimum and average performance of such applications in virtualized and non-virtualized scenarios. The key technique is to use a set of simultaneously active application replicas and to optimize their rejuvenation schedules. We derive an analytical method for maximizing the minimum "any-time" performance in certain cases and propose a heuristic method for maximizing the minimum and average performance in all others. To evaluate our method we perform extensive studies on two applications: Apache Axis 1.3, using its measured aging profiles, and the TPC-W benchmark instrumented with a memory-leak injector. The results show that our approach is a practical way to ensure uninterrupted availability and optimize performance even for strongly aging applications.
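The sketch below illustrates why staggered rejuvenation schedules help: with several active replicas whose individual performance decays with time since their last rejuvenation, evenly spreading the rejuvenation offsets raises the minimum aggregate performance compared to rejuvenating all replicas at once. The linear aging model, downtime value, and period are assumptions, not the paper's measured aging profiles.

```python
# Hedged sketch: compare synchronized vs. evenly staggered rejuvenation
# offsets under an assumed linear aging model with a fixed rejuvenation downtime.
def replica_performance(t_since_rejuvenation, peak=100.0, decay=0.5, downtime=5.0):
    """Linear aging model; the replica is unavailable while rejuvenating."""
    if t_since_rejuvenation < downtime:
        return 0.0
    return max(0.0, peak - decay * (t_since_rejuvenation - downtime))

def aggregate_over_cycle(offsets, period=200.0, step=1.0):
    """Minimum and average total performance of all replicas over one period."""
    samples = []
    t = 0.0
    while t < period:
        samples.append(sum(replica_performance((t - o) % period) for o in offsets))
        t += step
    return min(samples), sum(samples) / len(samples)

k, period = 3, 200.0
synchronized = [0.0] * k                        # all replicas rejuvenate together
staggered = [i * period / k for i in range(k)]  # evenly spread offsets
print("synchronized (min, avg):", aggregate_over_cycle(synchronized, period))
print("staggered    (min, avg):", aggregate_over_cycle(staggered, period))
```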