Data-parallel programming languages are an important component in today's parallel computing landscape. Among them are domain-specific languages such as shading languages in graphics (HLSL, GLSL, RenderMan, etc.) and "general-purpose" languages like CUDA or OpenCL. Current implementations of these languages on CPUs rely solely on multithreading to implement parallelism and ignore the additional intra-core parallelism provided by the SIMD instruction sets of those processors (such as Intel's SSE and the upcoming AVX or Larrabee instruction sets). In this paper, we discuss several aspects of implementing data-parallel languages on machines with SIMD instruction sets. Our main contribution is a language- and platform-independent code transformation that performs whole-function vectorization on low-level intermediate code given by a control flow graph in SSA form. We evaluate our technique in two scenarios: first, incorporated in a compiler for a domain-specific language used in real-time ray tracing; second, in a stand-alone OpenCL driver. We observe average speedup factors of 3.9 for the ray tracer and factors between 0.6 and 5.2 for different OpenCL kernels.
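To give an intuition of what whole-function vectorization produces (a minimal hand-written sketch, not the paper's SSA-based transformation; the function names and constants are invented), the following C code shows a scalar function with a branch next to a 4-wide SSE variant in which the branch has been converted from control flow to data flow using a mask and a blend:

    #include <xmmintrin.h>   /* SSE */
    #include <smmintrin.h>   /* SSE4.1: _mm_blendv_ps */

    /* Scalar version: processes one value at a time. */
    float scalar_kernel(float x) {
        if (x > 0.0f)
            return x * 2.0f;
        else
            return -x;
    }

    /* Hypothetical vectorized version: processes four values at once.
     * Both branch sides are executed and the results are combined with
     * a per-lane mask (control-flow to data-flow conversion). */
    __m128 vector_kernel(__m128 x) {
        __m128 mask     = _mm_cmpgt_ps(x, _mm_setzero_ps()); /* x > 0  */
        __m128 then_val = _mm_mul_ps(x, _mm_set1_ps(2.0f));  /* then   */
        __m128 else_val = _mm_sub_ps(_mm_setzero_ps(), x);   /* else   */
        return _mm_blendv_ps(else_val, then_val, mask);      /* select */
    }

The paper's transformation performs this kind of rewriting automatically on low-level intermediate code in SSA form rather than on C source.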
Data-parallel languages like OpenCL and CUDA are an important means to exploit the computational power of today's computing devices. In this paper, we deal with two aspects of implementing such languages on CPUs: first, we present a static analysis and an accompanying optimization to exclude code regions from control-flow to data-flow conversion, which is the commonly used technique to leverage vector instruction sets; second, we present a novel technique to implement barrier synchronization. We evaluate our techniques in a custom OpenCL CPU driver, which we compare both against itself in different configurations and against proprietary implementations by AMD and Intel. We achieve an average speedup factor of 1.21 compared to naïve vectorization, and additional factors of 1.15 to 2.09 for suitable kernels due to the optimizations enabled by our analysis. Our best configuration achieves an average speedup factor of over 2.5 against the Intel driver.
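The kind of region such an analysis can exclude is easiest to see in a small kernel. The OpenCL C example below is invented for illustration and is not taken from the paper's benchmarks: the first branch depends only on a kernel argument and is therefore uniform, so it can remain an ordinary branch after vectorization, while the second branch depends on get_global_id() and diverges across work-items, so it must be predicated.

    // Invented example kernel.
    __kernel void example(__global float *out,
                          __global const float *in,
                          float scale) {
        size_t gid = get_global_id(0);
        float v = in[gid];

        if (scale > 1.0f) {   // uniform: same outcome for all work-items,
            v *= scale;       // can stay a real branch
        }
        if (gid % 2 == 0) {   // divergent: varies per work-item, needs
            v = -v;           // control-flow to data-flow conversion
        }
        out[gid] = v;
    }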
First and foremost, I want to express my gratitude towards my advisor Prof. Dr. Sebastian Hack. Not only did he give me the opportunity to pursue a PhD in his group, he also gave me all the necessary freedom, boundaries, and advice to finally succeed. Furthermore, I would like to thank Prof. Dr. Dr. h.c. mult. Reinhard Wilhelm for reviewing this thesis, and Prof. Dr. Christoph Weidenbach and Dr. Jörg Herter for serving on my committee. I also want to offer my special thanks to Prof. Dr. Philipp Slusallek. He first introduced me to working in research during my undergraduate studies and has since accompanied, with his typical, contagious enthusiasm, most of the projects that I have been involved in. Special thanks also go to Dr. Ayal Zaks for a lot of valuable feedback and for the honor of writing the foreword. It is a pleasure to thank the following people that I met during my time at university:
Power consumption is a prevalent issue in current and future computing systems. SIMD processors amortize the power consumption of managing the instruction stream by executing the same instruction in parallel on multiple data. Therefore, in the past years, the SIMD width has steadily increased, and it is not unlikely that it will continue to do so. In this article, we experimentally study the influence of the SIMD width on the execution of data-parallel programs. We investigate how an increasing SIMD width (up to 1024) influences control-flow divergence and memory-access divergence, and how well techniques to mitigate them work at larger SIMD widths. We perform our study on 76 OpenCL applications and show that one group of programs scales well up to SIMD width 1024, whereas another group increasingly suffers from control-flow divergence. For those programs, thread regrouping techniques may become increasingly important at larger SIMD widths. We show what average speedups can be expected when increasing the SIMD width. For example, when switching from scalar execution to SIMD width 64, one can expect a speedup of 60.11, which increases to 62.46 when using thread regrouping. We also analyze the frequency of regular (uniform, consecutive) memory access patterns and observe a monotonic decrease of regular memory accesses from 82.6% at SIMD width 4 to 43.1% at SIMD width 1024.
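To make the access categories concrete, the following invented OpenCL C kernel (not part of the 76 applications studied) contains the patterns referred to above: a consecutive access that maps to one contiguous vector load, a uniform access where all work-items read the same address, and an irregular access that requires a per-lane gather and whose relative cost grows with the SIMD width.

    // Invented example kernel illustrating memory-access patterns.
    __kernel void access_patterns(__global float *out,
                                  __global const float *in,
                                  __global const int *idx,
                                  __global const float *lut) {
        size_t gid = get_global_id(0);
        float consecutive = in[gid];       // consecutive: contiguous vector load
        float uniform_val = lut[0];        // uniform: one load, broadcast to all lanes
        float gathered    = in[idx[gid]];  // irregular: per-lane gather
        out[gid] = consecutive + uniform_val + gathered;
    }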