Transactions can simplify distributed applications by hiding data distribution, concurrency, and failures from the application developer. Ideally, the developer would see the abstraction of a single large machine that runs transactions sequentially and never fails. This requires the transactional subsystem to provide opacity (strict serializability for both committed and aborted transactions), as well as transparent fault tolerance with high availability. As even the best abstractions are unlikely to be used if they perform poorly, the system must also provide high performance. Existing distributed transactional designs either weaken this abstraction or are not designed for the best performance within a data center. This paper extends the design of FaRM, which provides strict serializability only for committed transactions, to provide opacity while maintaining FaRM's high throughput, low latency, and high availability within a modern data center. It uses timestamp ordering based on real time, with clocks synchronized to within tens of microseconds across a cluster, and a failover protocol to ensure correctness across clock master failures. FaRM with opacity can commit 5.4 million new-order transactions per second when running the TPC-C transaction mix on 90 machines with 3-way replication.
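The core idea behind real-time timestamp ordering with imperfectly synchronized clocks can be illustrated with a minimal sketch. The code below is not FaRM's actual commit protocol; it only shows the general "uncertainty wait" pattern: a clock reading is an interval of width 2ε, and a transaction waits until its chosen commit timestamp is guaranteed to be in the past on every machine before exposing its effects. The class and function names are hypothetical.

```python
import time

class UncertainClock:
    """A clock whose reading is only known to within +/- epsilon seconds
    (tens of microseconds in the setting the paper describes)."""
    def __init__(self, epsilon=50e-6):
        self.epsilon = epsilon

    def now_interval(self):
        """Return (earliest, latest) bounds on the true time."""
        t = time.monotonic()
        return (t - self.epsilon, t + self.epsilon)

def commit_wait(clock):
    """Pick a commit timestamp no earlier than the true time, then spin
    until the timestamp is certainly in the past before acknowledging."""
    earliest, latest = clock.now_interval()
    ts = latest                            # upper bound of the uncertainty window
    while clock.now_interval()[0] < ts:    # wait out the uncertainty
        pass
    return ts

clock = UncertainClock()
ts = commit_wait(clock)
assert clock.now_interval()[0] >= ts       # ts is now definitely in the past
```

With clocks synchronized to within tens of microseconds, this wait is short enough not to dominate commit latency, which is why tight clock synchronization matters for throughput.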
Observational evidence supports the hypothesis that many large earthquakes are preceded by accelerating-decelerating seismic release rates described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics, along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We define a generalized Benioff strain function Ω_ξ(t) = Σ_{i=1}^{n(t)} E_i^ξ, where E_i is the energy of the i-th earthquake and 0 ≤ ξ ≤ 1, and derive a time-to-failure power law for Ω_ξ(t) for a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. In the time-to-failure power law followed by Ω_ξ(t), the existence of a common exponent m_ξ, which is a function of the non-extensive entropic parameter q, is demonstrated. An analytic expression is derived that connects m_ξ with the Tsallis entropic parameter q and the b-value of the Gutenberg-Richter law. In addition, the range of q and b values that could drive the system into an accelerating stage and to failure is discussed, along with precursory variations of m_ξ resulting from the precursory b-value anomaly. Finally, our calculations based on Tsallis entropy and energy conservation offer a new view of the empirical laws derived in the literature, relate the average generalized Benioff strain rate during the accelerating period to the background rate, and connect the model parameters with the expected magnitude of the main shock.
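The generalized Benioff strain and the power-law time-to-failure fit can be sketched numerically. The snippet below uses the cumulative definition Ω_ξ(t) = Σ E_i^ξ from the abstract and the classical time-to-failure form Ω(t) = A + B(t_f − t)^m known from the accelerating-seismic-release literature; all numerical values are arbitrary synthetic choices, not results from the paper.

```python
import numpy as np

def generalized_benioff_strain(energies, xi=0.5):
    """Cumulative generalized Benioff strain: Omega_xi(t_n) = sum_{i<=n} E_i**xi.
    xi = 0.5 recovers the classical Benioff strain (square root of energy)."""
    return np.cumsum(np.asarray(energies, dtype=float) ** xi)

# --- Illustration on synthetic data (all numbers arbitrary) ---
# Power-law time-to-failure: Omega(t) = A + B*(tf - t)**m with B < 0 and
# 0 < m < 1 gives an accelerating release toward the failure time tf.
tf, A, B, m = 100.0, 50.0, -5.0, 0.3
t = np.linspace(0.0, 99.0, 200)
omega = A + B * (tf - t) ** m

# With tf known, the exponent m is the slope in log-log coordinates:
# log(A - Omega) = log(-B) + m * log(tf - t)
slope, _ = np.polyfit(np.log(tf - t), np.log(A - omega), 1)
print(round(slope, 3))  # recovers m = 0.3
```

In practice the failure time t_f is unknown and is fit jointly with the other parameters; the regression above is only the simplest case.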
A priori, locking seems easy: to protect shared data from concurrent accesses, it suffices to lock before accessing the data and unlock after. Nevertheless, making locking efficient requires fine-tuning (a) the granularity of locks and (b) the locking strategy for each lock and possibly each workload. As a result, locking can become very complicated to design and debug. We present GLS, a middleware that makes lock-based programming simple and effective. GLS offers the classic lock-unlock interface of locks. However, in contrast to classic lock libraries, GLS does not require any effort from the programmer to allocate and initialize locks, nor to select the appropriate locking strategy. With GLS, all these intricacies of locking are hidden from the programmer. GLS is based on GLK, a generic lock algorithm that dynamically adapts to the contention level on the lock object. GLK is able to deliver the best performance among simple spinlocks, scalable queue-based locks, and blocking locks. Furthermore, GLS offers several debugging options for easily detecting various lock-related issues, such as deadlocks. We evaluate GLS and GLK on two modern hardware platforms, using several software systems (i.e., HamsterDB, Kyoto Cabinet, Memcached, MySQL, SQLite), and show how GLK improves their performance by 23% on average compared to their default locking strategies. We illustrate the simplicity of using GLS and its debugging facilities by rewriting the synchronization code for Memcached and detecting two potential correctness issues.

CCS Concepts: • Computing methodologies → Shared memory algorithms; Concurrent algorithms; • Computer systems organization → Multicore architectures.

Keywords: Locking; Adaptive Locking; Locking Middleware; Locking Runtime; Synchronization; Multi-cores; Performance

* Work done while the author was at EPFL. Currently at Google.
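The adaptation idea behind a contention-aware lock can be sketched as a toy spin-then-block lock: spinning is cheap under low contention, while blocking in the OS is cheaper under high contention. This is only an illustration under that assumption; GLK's actual algorithm also includes a scalable queue-based mode and adapts per lock at runtime, and the class name and threshold below are hypothetical.

```python
import threading

class AdaptiveLock:
    """Toy spin-then-block lock with the classic lock/unlock interface.
    Spins briefly (good when the lock is lightly contended), then falls
    back to a blocking OS lock (good when heavily contended)."""
    SPIN_TRIES = 100  # arbitrary threshold chosen for this sketch

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        for _ in range(self.SPIN_TRIES):
            if self._lock.acquire(blocking=False):  # spin phase
                return
        self._lock.acquire()                        # blocking phase

    def release(self):
        self._lock.release()

# Usage: drop-in for a plain lock, as in the lock-unlock interface GLS keeps.
counter = 0
lock = AdaptiveLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(counter)  # 4000
```

A real adaptive lock would also measure contention (e.g., failed acquire attempts or queue length) to switch modes dynamically rather than using a fixed retry count.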
† Author names appear in alphabetical order.
On 27 September 2021, a shallow earthquake with a focal depth of 10 km and moment magnitude Mw6.0 occurred onshore in central Crete (Greece). The evolution of possible preseismic patterns in the area of central Crete before the Mw6.0 event was investigated by applying the method of multiresolution wavelet analysis (MRWA), along with that of natural time (NT). The monitoring of preseismic patterns through critical parameters defined by NT analysis, with the results of MRWA serving as the initiation point for the NT analysis, forms a promising framework that may lead to new universal principles describing the evolution of patterns before strong earthquakes. Initially, we apply MRWA to the interevent time series of the successive regional earthquakes in order to investigate the approach of the regional seismicity towards critical stages and to define the starting point of the natural time domain. Then, using the results of MRWA, we apply the NT analysis, showing that the regional seismicity approached criticality for a prolonged period of ~40 days before the occurrence of the Mw6.0 earthquake, when the κ1 natural time parameter reached the critical value of κ1 = 0.070, as suggested by the NT method.
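The κ1 parameter used above is the variance of natural time: for N events, the k-th event is assigned natural time χ_k = k/N and weight p_k proportional to its energy, and κ1 = ⟨χ²⟩ − ⟨χ⟩². A minimal sketch of that computation, assuming energies are already available per event:

```python
import numpy as np

def kappa1(energies):
    """Variance of natural time: kappa_1 = <chi^2> - <chi>^2, where
    chi_k = k/N for the k-th of N events and each event is weighted by
    its normalized energy p_k = E_k / sum(E). Values around 0.070 are
    taken as the criticality signature in the natural time literature."""
    E = np.asarray(energies, dtype=float)
    N = len(E)
    chi = np.arange(1, N + 1) / N          # natural time of each event
    p = E / E.sum()                        # energy weights
    return float(np.sum(p * chi**2) - np.sum(p * chi)**2)

# Sanity check: for many equal-energy events, kappa_1 approaches the
# uniform-distribution value 1/12 ~ 0.0833.
print(round(kappa1(np.ones(1000)), 4))  # 0.0833
```

In a real analysis the energies would be derived from catalog magnitudes and κ1 tracked in a sliding window of events, which is beyond this sketch.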