Future transaction processing systems may have substantially higher levels of concurrency for reasons that include: (1) increasing disparity between processor speeds and data access latencies, (2) large numbers of processors, and (3) distributed databases. Another influence is the trend towards longer or more complex transactions. A possible consequence is substantially more data contention, which could limit total achievable throughput. In particular, it is known that the usual locking method of concurrency control is not well suited to environments where data contention is a significant factor. Here we consider a number of concurrency control concepts and transaction scheduling techniques that are applicable to high-contention environments and that do not rely on database semantics to reduce contention. These include access invariance and its application to prefetching of data, approximations to essential blocking such as wait-depth-limited scheduling, and phase-dependent control. The performance of various concurrency control methods based on these concepts is studied using detailed simulation models. The results indicate that the new techniques can offer substantial benefits for systems with high levels of data contention.
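To make the wait-depth idea concrete, the following Python sketch enforces a depth-1 limit over a simple lock table: a transaction may block only if that keeps every wait chain at length one, and otherwise a conflicting transaction is restarted. The class name, interface, and the victim-selection rule (restart the transaction with less completed work) are illustrative assumptions, not the scheduling algorithm studied in the paper.

```python
# Minimal sketch of wait-depth-limited scheduling with depth limit 1.
# Names and the victim-selection rule are assumptions for illustration.

class WDLScheduler:
    def __init__(self):
        self.owner = {}        # data item -> transaction holding its lock
        self.waits_for = {}    # blocked transaction -> transaction it waits on
        self.work = {}         # transaction -> units of work completed (for victim choice)

    def has_waiters(self, txn):
        return any(holder == txn for holder in self.waits_for.values())

    def request(self, txn, item):
        holder = self.owner.get(item)
        if holder is None:
            self.owner[item] = txn
            return "granted"
        # Blocking txn behind holder keeps every wait chain at depth <= 1 only
        # if the holder is not itself blocked and txn has no waiters of its own.
        if holder not in self.waits_for and not self.has_waiters(txn):
            self.waits_for[txn] = holder
            return "blocked"
        # Otherwise the depth limit would be exceeded: restart the conflicting
        # transaction that has completed less work (an illustrative rule).
        victim = txn if self.work.get(txn, 0) <= self.work.get(holder, 0) else holder
        return f"restart {victim}"

sched = WDLScheduler()
sched.work = {"T1": 5, "T2": 3, "T3": 1}
print(sched.request("T1", "x"))   # granted
print(sched.request("T2", "y"))   # granted
print(sched.request("T2", "x"))   # blocked: T2 waits on T1, depth 1
print(sched.request("T3", "y"))   # restart T3: waiting on blocked T2 would give depth 2
```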
Several technologies are leveraged to establish an architecture for a low-cost, high-performance memory controller and memory system that more than doubles the effective size of the installed main memory without significant added cost. This architecture is the first of its kind to employ real-time main-memory content compression at a performance competitive with the best the market has to offer. A large low-latency shared cache exists between the processor bus and a content-compressed main memory. High-speed, low-latency hardware performs real-time compression and decompression of data traffic between the shared cache and the main memory. Sophisticated memory management hardware dynamically allocates main-memory storage in small sectors to accommodate storing the variable-sized compressed data without the need for "garbage" collection or significant wasted space due to fragmentation. Though the main-memory compression ratio is limited to the range 1:1 to 64:1, typical ratios range between 2:1 and 6:1, as measured in "real-world" system applications.

Architecture

Conventional "commodity" computer systems typically share a common architecture, in which a collection of processors are connected to a common SDRAM-based
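The sector-based allocation described above can be sketched in a few lines: each compressed line occupies however many fixed-size sectors it needs, and freeing a line simply returns its sectors to a free list, so no compaction or garbage-collection pass is required. The sector size, class names, and the use of zlib as a stand-in compressor are illustrative assumptions, not details of the actual hardware.

```python
# Illustrative sketch of sector-based allocation for compressed memory lines.
# SECTOR_SIZE and all names are assumed values, not the hardware's parameters.

import math
import zlib

SECTOR_SIZE = 256  # bytes; illustrative

class SectoredMemory:
    def __init__(self, num_sectors):
        self.free_sectors = list(range(num_sectors))
        self.directory = {}     # line address -> list of sector ids
        self.sectors = {}       # sector id -> bytes payload

    def store_line(self, addr, raw_line):
        compressed = zlib.compress(raw_line)       # stand-in for the hardware compressor
        needed = math.ceil(len(compressed) / SECTOR_SIZE)
        if needed > len(self.free_sectors):
            raise MemoryError("out of physical sectors")
        ids = [self.free_sectors.pop() for _ in range(needed)]
        for i, sid in enumerate(ids):
            self.sectors[sid] = compressed[i * SECTOR_SIZE:(i + 1) * SECTOR_SIZE]
        self.directory[addr] = ids

    def load_line(self, addr):
        blob = b"".join(self.sectors[sid] for sid in self.directory[addr])
        return zlib.decompress(blob)

    def free_line(self, addr):
        # Freed sectors go straight back on the free list; no compaction needed.
        self.free_sectors.extend(self.directory.pop(addr))

mem = SectoredMemory(num_sectors=4096)
mem.store_line(0x1000, b"A" * 1024)        # highly compressible 1 KB line
print(len(mem.directory[0x1000]))          # stored in far fewer than 4 sectors
assert mem.load_line(0x1000) == b"A" * 1024
mem.free_line(0x1000)
```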
Methods are presented for the encoding of information into binary sequences in which the number of ZEROS occurring between each pair of successive ONES has both an upper and a lower bound. The techniques, based on the state structure of the constraints, permit the construction of short, efficient codes with favorable error-propagation-limiting properties.
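A small sketch makes the constraint concrete: a sequence is admissible when every run of ZEROS between successive ONES has length at least d and at most k. The function below only checks the constraint (with the added assumption that a trailing run longer than k is also rejected); it is not the state-based code construction the paper develops.

```python
# Check a (d, k) run-length constraint: between any two successive ones,
# the number of zeros must be at least d and at most k.

def satisfies_dk(bits, d, k):
    run = None                     # zeros counted since the last one (None before the first one)
    for b in bits:
        if b == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
            if run > k:            # assumed: a trailing run longer than k also fails
                return False
    return True

# Zero-runs of 2 and 3 between ones satisfy (d, k) = (2, 3).
print(satisfies_dk([0, 1, 0, 0, 1, 0, 0, 0, 1, 0], 2, 3))   # True
print(satisfies_dk([0, 1, 1, 0], 2, 3))                     # False: run of 0 zeros
```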
Given the pairwise probability of conflict p among transactions in a transaction processing system, together with the total number of concurrent transactions n, the effective level of concurrency E(n, p) is defined as the expected number of the n transactions that can run concurrently and actually do useful work. Using a random graph model of concurrency, we show for three general classes of concurrency control methods, examples of which are (1) standard locking, (2) strict priority scheduling, and (3) optimistic methods, that (1) E → 0 for standard locking methods, (2) E ≤ 1/p for strict priority scheduling methods, and (3) E → ∞ for optimistic methods. Also found are bounds on E in the case where conflicts are analyzed so as to maximize E. The predictions of the random graph model are confirmed by simulations of an abstract transaction processing system. In practice, though, there is a price to pay for the increased effective level of concurrency of methods (2) and (3): using these methods there is more wasted work (i.e., more steps executed by transactions that are later aborted). In response to this problem, three new concurrency control methods suggested by the random graph model analysis are developed. Two of these, called (a) running priority and (b) older or running priority, are shown by the simulation results to perform better than the previously known methods (1)-(3) for relatively large n or large p, in terms of achieving a high effective level of concurrency at a comparatively small cost in wasted work.
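The random graph model lends itself to a simple Monte Carlo check. The sketch below draws a conflict graph on n transactions with edge probability p and greedily counts how many can run together without conflicts; this is only a rough stand-in for the effective-concurrency measures analyzed for each method, and the function name, greedy admission rule, and parameter values are chosen purely for illustration.

```python
# Monte Carlo sketch of the random conflict-graph model: n transactions,
# each pair conflicting independently with probability p.  The estimate here
# greedily admits transactions that conflict with no already-admitted one.

import random

def simulate_effective_concurrency(n, p, trials=1000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Build a random conflict graph on n transactions.
        conflicts = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    conflicts[i].add(j)
                    conflicts[j].add(i)
        # Greedily admit transactions with no conflict against admitted ones.
        running = []
        for t in range(n):
            if all(t not in conflicts[r] for r in running):
                running.append(t)
        total += len(running)
    return total / trials

print(simulate_effective_concurrency(n=50, p=0.05))
```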