This paper presents an experimental comparison of a number of algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer's divide-and-conquer algorithm; Fortune's sweepline algorithm; several versions of the incremental algorithm, including one by Ohya, Iri, and Murota, a new bucketing-based algorithm described in this paper, and Devillers's version of a Delaunay-tree-based algorithm that appears in LEDA; an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle, in a manner similar to gift-wrapping algorithms for convex hulls; and Barber's convex-hull-based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites, but we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent: we also analyze the major high-level primitives that the algorithms use and experimentally measure how often implementations of these algorithms perform each operation.
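One high-level primitive shared by most Delaunay triangulation algorithms is the in-circle test, which decides whether a site lies inside the circumcircle of a triangle. The sketch below is a minimal illustration in C, assuming the standard 3x3 determinant formulation and plain floating-point arithmetic; it is not taken from any of the implementations studied, and the function and type names are hypothetical.

    #include <stdio.h>

    typedef struct { double x, y; } point;

    /* Sign of the in-circle determinant: positive if d lies inside the
       circle through a, b, c (given in counterclockwise order), negative
       if outside, zero if the four points are cocircular. The points are
       translated so d sits at the origin before taking the determinant. */
    double incircle(point a, point b, point c, point d)
    {
        double ax = a.x - d.x, ay = a.y - d.y;
        double bx = b.x - d.x, by = b.y - d.y;
        double cx = c.x - d.x, cy = c.y - d.y;
        double alift = ax * ax + ay * ay;
        double blift = bx * bx + by * by;
        double clift = cx * cx + cy * cy;
        return alift * (bx * cy - by * cx)
             - blift * (ax * cy - ay * cx)
             + clift * (ax * by - ay * bx);
    }

    int main(void)
    {
        point a = {0, 0}, b = {1, 0}, c = {0, 1};
        point inside = {0.5, 0.5}, outside = {2, 2};
        printf("%g %g\n", incircle(a, b, c, inside),
                          incircle(a, b, c, outside));  /* prints 0.5 -4 */
        return 0;
    }

A naive floating-point version like this can misclassify nearly cocircular inputs; counting how often such predicates are invoked, rather than only timing whole runs, is the kind of primitive-level analysis the paper performs.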
This dissertation develops and studies fast algorithms for solving closest-point problems. Algorithms for such problems have applications in many areas, including statistical classification, crystallography, data compression, and finite element analysis. In addition to a comprehensive empirical study of known sequential methods, I introduce new parallel algorithms for these problems that are both efficient and practical. I present a simple and flexible programming model for designing and analyzing parallel algorithms. I also describe fast parallel algorithms for nearest-neighbor searching and constructing Voronoi diagrams. Finally, I demonstrate that my algorithms obtain good performance on a wide variety of machine architectures.

The key algorithmic ideas that I examine are exploiting spatial locality and random sampling. Spatial decomposition allows many concurrent threads to work independently of one another in local areas of a shared data structure. Random sampling provides a simple way to adaptively decompose irregular problems and to balance workload among many threads. Used together, these techniques result in effective algorithms for a wide range of geometric problems.

The key experimental ideas used in my thesis are simulation and animation. I use algorithm animation to validate algorithms and gain intuition about their behavior. I model the expected performance of algorithms using simulation experiments, together with knowledge of how much critical primitive operations cost on a given machine, and I do this without the burden of esoteric computational models that attempt to cover every possible variable in the design of a computer system. An iterative process of design, validation, and simulation delays the actual implementation until as many details as possible are accounted for. Then, further experiments are used to tune implementations for better performance.
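To illustrate the spatial-decomposition idea, the sketch below (a hypothetical example, not code from the thesis) buckets sites from the unit square into a uniform K x K grid. Nearby sites land in the same cell, so concurrent threads could each own a disjoint set of cells and operate on local areas of the shared structure independently.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y; } site;

    /* Map a site in the unit square to a cell of a k x k grid. */
    static int bucket_of(site s, int k)
    {
        int ix = (int)(s.x * k), iy = (int)(s.y * k);
        if (ix >= k) ix = k - 1;   /* clamp sites on the x = 1 boundary */
        if (iy >= k) iy = k - 1;
        return iy * k + ix;
    }

    int main(void)
    {
        enum { N = 10000, K = 8 };
        int count[K * K] = {0};

        srand(1);
        for (int i = 0; i < N; i++) {
            site s = { rand() / (double)RAND_MAX,
                       rand() / (double)RAND_MAX };
            count[bucket_of(s, K)]++;  /* a thread would own some cells */
        }
        for (int b = 0; b < K * K; b++)
            printf("bucket %2d: %d sites\n", b, count[b]);
        return 0;
    }

For uniformly distributed sites the bucket counts are nearly equal, so a static assignment of cells to threads balances well; for irregular inputs, random sampling of the sites can be used to estimate cell densities and adapt the decomposition before the full workload is distributed.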