Irregular applications, which manipulate large, pointer-based data structures like graphs, are difficult to parallelize manually. Automatic tools and techniques such as restructuring compilers and runtime speculative execution have failed to uncover much parallelism in these applications, despite considerable effort by the research community. These difficulties have even led some researchers to wonder if there is any coarse-grain parallelism worth exploiting in irregular applications. In this paper, we describe two real-world irregular applications: a Delaunay mesh refinement application and a graphics application that performs agglomerative clustering. By studying the algorithms and data structures used in these applications, we show that there is substantial coarse-grain data parallelism in these applications, but that this parallelism is highly dependent on the input data and therefore cannot be uncovered by compiler analysis. In principle, optimistic techniques such as thread-level speculation can be used to uncover this parallelism, but we argue that current implementations cannot accomplish this because they do not use the proper abstractions for the data structures in these programs. These insights have informed our design of the Galois system, an object-based optimistic parallelization system for irregular applications. There are three main aspects to Galois: (1) a small number of syntactic constructs for packaging optimistic parallelism as iteration over ordered and unordered sets, (2) assertions about methods in class libraries, and (3) a runtime scheme for detecting and recovering from potentially unsafe accesses to shared memory made by an optimistic computation. We show that Delaunay mesh refinement and agglomerative clustering can be parallelized in a straightforward way using the Galois approach, and we present experimental measurements showing that the approach performs well in practice. These results suggest that Galois is a practical approach to exploiting data parallelism in irregular programs.
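To make the set-iterator idea concrete, here is a minimal C++ sketch of the worklist pattern in Delaunay mesh refinement that Galois-style unordered-set iteration targets. All names (Mesh, Triangle, refineMesh, the stubbed methods) are illustrative assumptions rather than the paper's actual constructs or API; in Galois, the while loop below would be written as iteration over an unordered set, with commutativity assertions on the mesh's methods letting the runtime execute iterations optimistically in parallel and roll back the ones that conflict.

#include <deque>
#include <vector>

struct Triangle {};  // placeholder mesh element (geometry omitted)

struct Mesh {
    // Stub: decides whether a triangle violates the quality bound.
    bool isBad(const Triangle*) const { return false; }
    // Stub: re-triangulates the cavity around a bad triangle and
    // returns any newly created triangles.
    std::vector<Triangle*> refine(Triangle*) { return {}; }
};

void refineMesh(Mesh& mesh, std::deque<Triangle*> worklist) {
    // Sequential form of the loop; Galois expresses it as iteration
    // over an unordered set, so iterations touching disjoint cavities
    // can run speculatively in parallel.
    while (!worklist.empty()) {
        Triangle* t = worklist.front();
        worklist.pop_front();
        if (!mesh.isBad(t)) continue;              // may already have been fixed
        for (Triangle* created : mesh.refine(t))   // cavity re-triangulation
            if (mesh.isBad(created)) worklist.push_back(created);
    }
}

The processing order of bad triangles does not affect the correctness of the refined mesh, which is why an unordered set, rather than an ordered one, suffices for this application.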
Translucency is an important aspect of material appearance. To some extent, humans are able to estimate translucency in a consistent way across different shapes and lighting conditions, i.e., to exhibit translucency constancy. However, Fleming and Bülthoff (2005) have shown that there can be large failures of constancy, with lighting direction playing an important role. In this paper, we explore the interaction of shape, illumination, and degree of translucency constancy more deeply by including in our analysis the variations in translucent appearance that are induced by the shape of the scattering phase function. This is an aspect of translucency that has been largely neglected. We used appearance matching to measure how perceived translucency depends on both lighting and phase function. The stimuli were rendered scenes containing a figurine, and the lighting direction was represented by a spherical harmonic basis function. Observers adjusted the density of a figurine under one lighting condition to match the material property of a target figurine under another lighting condition. Across trials, we varied both the lighting direction and the phase function of the target. The phase functions were sampled from a 2D space proposed by Gkioulekas et al. (2013) to span an important range of translucent appearance. We find that the degree of translucency constancy depends strongly on the phase function's location in this 2D space, suggesting that the space captures useful information about different types of translucency. We also find that the geometry of an object is important. We compare the case of a torus, which has a simple smooth shape, with that of the figurine, which has more complex geometric features. The complex shape shows a greater range of apparent translucencies and a higher degree of constancy failure. In summary, humans show significant failures of translucency constancy across changes in lighting direction, but the effect depends on both the complexity of the shape and the scattering phase function.
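As background (not part of this study's stimulus specification, which uses the richer 2D space of Gkioulekas et al.): a scattering phase function describes the angular distribution of light scattered inside the material, and a common single-parameter example of its "shape" is the Henyey-Greenstein lobe,

p_{\mathrm{HG}}(\theta; g) \;=\; \frac{1}{4\pi}\,\frac{1 - g^{2}}{\bigl(1 + g^{2} - 2g\cos\theta\bigr)^{3/2}}, \qquad g \in (-1, 1),

where g biases the lobe toward forward (g > 0) or backward (g < 0) scattering. Varying the lobe shape changes translucent appearance even when the overall scattering density is held fixed, which is the kind of variation the study manipulates.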
Multidimensional lightcuts is a new scalable method for efficiently rendering rich visual effects such as motion blur, participating media, depth of field, and spatial anti-aliasing in complex scenes. It introduces a flexible, general rendering framework that unifies the handling of such effects by discretizing the integrals into large sets of gather and light points and adaptively approximating the sum of all possible gather-light pair interactions. We create an implicit hierarchy, the product graph, over the gather-light pairs to rapidly and accurately approximate the contribution from hundreds of millions of pairs per pixel while only evaluating a tiny fraction (e.g., 200--1,000). We build upon the techniques of the prior Lightcuts method for complex illumination at a point; however, by considering the complete pixel integrals, we achieve much greater efficiency and scalability. Our example results demonstrate efficient handling of volume scattering, camera focus, and motion of lights, cameras, and geometry. For example, enabling high-quality motion blur with 256x temporal sampling requires only a 6.7x increase in shading cost in a scene with complex moving geometry, materials, and illumination.
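The following is a minimal C++ sketch, under assumed names (Cluster, Pair, shadePixel, and stubbed bound/contribution helpers), of the adaptive refinement idea behind this kind of cut selection: start from the root gather/light cluster pair and repeatedly split the pair with the largest conservative error bound until every remaining pair's bound is a small fraction of the running estimate. It is not the paper's algorithm in detail; in particular, the real method derives tight per-pair bounds over material, geometry, and visibility terms, which are stubbed out here.

#include <queue>
#include <vector>

struct Cluster {                        // node of a gather tree or light tree
    std::vector<Cluster*> children;     // empty => leaf
    // representative point, intensity, and bounding data omitted
};

struct Pair { Cluster* g; Cluster* l; double estimate; double bound; };

// Stubs standing in for the real representative evaluation and the
// conservative upper bound on the error of using that representative.
double representativeContribution(Cluster*, Cluster*) { return 0.0; }
double errorBound(Cluster*, Cluster*) { return 0.0; }

double shadePixel(Cluster* gatherRoot, Cluster* lightRoot,
                  double relThreshold = 0.01, int maxEvals = 1000) {
    auto cmp = [](const Pair& a, const Pair& b) { return a.bound < b.bound; };
    std::priority_queue<Pair, std::vector<Pair>, decltype(cmp)> active(cmp);

    auto makePair = [](Cluster* g, Cluster* l) {
        return Pair{g, l, representativeContribution(g, l), errorBound(g, l)};
    };

    Pair root = makePair(gatherRoot, lightRoot);
    active.push(root);
    double total = root.estimate;       // running estimate of the pixel value
    int evals = 1;

    while (!active.empty() && evals < maxEvals) {
        Pair p = active.top();
        if (p.bound <= relThreshold * total) break;   // the cut is good enough
        active.pop();
        // Split whichever side still has children (a real implementation
        // chooses the side that reduces the error bound the most).
        Cluster* split = !p.g->children.empty() ? p.g : p.l;
        if (split->children.empty()) continue;        // leaf-leaf pair is exact
        total -= p.estimate;                          // replace parent estimate
        for (Cluster* c : split->children) {
            Pair child = (split == p.g) ? makePair(c, p.l) : makePair(p.g, c);
            total += child.estimate;
            active.push(child);
            ++evals;
        }
    }
    return total;
}

With tight per-pair bounds and a perceptually motivated threshold, this style of traversal is what allows only a few hundred pairs to be evaluated out of the hundreds of millions implied by the full gather-light product.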