In a 'shared-nothing' parallel computer, each processor has its own memory and disks and processors communicate by passing messages through an interconnect. Many academic researchers, and some vendors, assert that shared-nothingness is the 'consensus' architecture for parallel DBMSs. This alleged consensus is used as a justification for simulation models, algorithms, research prototypes and even marketing campaigns. We argue that shared-nothingness is no longer the consensus hardware architecture and that hardware resource sharing is a poor basis for categorising parallel DBMS software architectures if one wishes to compare the performance characteristics of parallel DBMS products.
Joins are among the most expensive and performance-critical operations in relational database systems. In this thesis, we investigate processing techniques for joins that are based on a temporal intersection condition. Intuitively, such joins are used whenever one wants to match data from two or more relations that are valid at the same time. This work is divided into two parts. First, we analyse techniques that have been proposed for equi-joins. Some of these have already been adapted for temporal join processing by other authors. However, hash-based and parallel techniques, which are usually the most efficient ones in the context of equi-joins, have attracted little attention and leave several temporal-specific issues unresolved. Hash-based and parallel techniques are based on explicit symmetric partitioning. In the case of an equi-join condition, partitioning can guarantee that the relations are split into disjoint fragments; in the case of a temporal intersection condition, partitioning usually results in non-disjoint fragments, with a large number of tuples being replicated between fragments. This causes considerable overhead for partitioned temporal join processing. The problem is an instance of the 'min-max dilemma': minimising the number of replicated tuples means minimising the number of fragments, and thus the degree of parallelism; conversely, increasing the number of fragments, and therefore the degree of parallelism, also increases the number of tuple replications. We analyse this problem and show that there is an algorithm of polynomial time complexity that computes an optimal solution to the interval partitioning problem (IP). This result concludes the analytical part. In the second, synthetic part of this work, we focus on the conclusions that can be drawn from the results of the first part.
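The replication overhead described above can be illustrated with a minimal sketch. This is not the thesis's algorithm, only a toy illustration: when a temporal relation is range-partitioned along the time line, every tuple whose validity interval intersects more than one fragment's time range must be copied into each such fragment (the function names and the tuple layout `(key, start, end)` are assumptions for the example).

```python
# Toy illustration (not the thesis's algorithm): range-partitioning a
# temporal relation forces replication of tuples whose validity
# intervals span more than one fragment's time range.

def partition_intervals(tuples, breakpoints):
    """Partition (key, start, end) tuples into fragments delimited by
    breakpoints; a tuple is placed in every fragment it intersects."""
    # Fragment i covers the half-open range [breakpoints[i], breakpoints[i+1]).
    fragments = [[] for _ in range(len(breakpoints) - 1)]
    for t in tuples:
        _key, start, end = t
        for i, frag in enumerate(fragments):
            lo, hi = breakpoints[i], breakpoints[i + 1]
            if start < hi and end > lo:  # interval intersects fragment range
                frag.append(t)
    return fragments

def replication_overhead(fragments, n_tuples):
    """Number of extra tuple copies introduced by the partitioning."""
    return sum(len(f) for f in fragments) - n_tuples

tuples = [("a", 1, 4), ("b", 3, 9), ("c", 6, 7)]
# Two fragments, [0, 5) and [5, 10): tuple "b" intersects both and is
# replicated, so one extra copy is created.
frags = partition_intervals(tuples, [0, 5, 10])
print(replication_overhead(frags, len(tuples)))  # -> 1
```

Adding more breakpoints increases the degree of parallelism but also the number of intervals that straddle a breakpoint, which is exactly the min-max dilemma discussed above.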
We propose and develop an optimisation process that

• analyses the temporal relations that participate in a temporal join,
• proposes several possible partitions for these relations,
• analyses these partitions and predicts their performance implications on the basis of a parameterised cost model, and
• chooses the cheapest partition to process the temporal join.

We also show how this process can be efficiently implemented using a new index structure, called the IP-table. The thesis concludes with a thorough experimental evaluation of the optimisation process and a chapter that shows the suitability of IP-tables in the wider context of temporal query optimisation, namely using them to estimate the selectivities of temporal join conditions.

After over 1000 days of PhD research and around 500 pages of thesis, paper and report writing, I can finally add the final and most enjoyable part: these lines, in which I thank the many people who have enabled me to produce this work by providing the fruitful environment that I have had over the last three years. First of all, I have to thank Isabel, my wife, for her endless patience and support and for cheering me up when things did not run as smoothly as my cabeza ...
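The final step of the process above, choosing the cheapest partition, can be sketched with a deliberately simplified cost model. The model below is a hypothetical stand-in, not the parameterised model from the thesis: it charges a per-fragment startup cost and assumes parallel execution is bound by the largest fragment, which is enough to show why neither the coarsest nor the finest partition is necessarily cheapest.

```python
# Hypothetical sketch of cost-based partition selection. The cost model
# is a toy stand-in for the thesis's parameterised model: per-fragment
# startup cost plus per-tuple work on the largest fragment (parallel
# execution is bound by the slowest processor).

def estimated_cost(fragments, c_startup=10.0, c_tuple=1.0):
    """Estimate the cost of processing a join over the given fragments."""
    if not fragments:
        return float("inf")
    return c_startup * len(fragments) + c_tuple * max(len(f) for f in fragments)

# Three candidate partitions of eight tuples (tuples shown as ids;
# replicated ids appear in more than one fragment):
candidates = [
    [list(range(8))],                            # no partitioning
    [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7]],          # 2 fragments, some replication
    [[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 0]] # 4 fragments, more replication
]
best = min(candidates, key=lambda f: estimated_cost(f, c_startup=2.0))
print(len(best))  # -> 2: the middle option balances startup cost,
                  #    replication and parallelism
```

With these (arbitrary) parameters the two-fragment partition wins: the unpartitioned plan pays for a large fragment, while the four-fragment plan pays four startup costs plus the replication overhead. Varying `c_startup` and `c_tuple` shifts the optimum, which is why the process is driven by a parameterised model rather than a fixed rule.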
E-commerce applications have imposed a huge set of new paradigms and challenges on the entire software and hardware community. In this paper, we focus on the changes and challenges that data warehouses already face in this context. SAP provides, with its Business Information Warehouse (BW), a basic infrastructure element for its e-commerce platform mySAP.com. We summarise some of the experience we have gained in adjusting and extending BW to fit the requirements of mySAP.com.