SUMMARY

This special issue contributes to this promising field with extended and carefully reviewed versions of selected papers from two workshops, namely the 2nd Minisymposium on GPU Computing, which was held as part of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM 2011).

The importance of hardware accelerators (graphics processing units (GPUs), the Cell processor, field-programmable gate arrays (FPGAs), ...) is rapidly increasing in performance-sensitive areas. They are particularly relevant in high-throughput disciplines such as high-quality 3D computer graphics and vision, real-time data stream processing, and high-performance scientific computing. The main reason behind this trend is that these accelerators can potentially yield speedups and power savings orders of magnitude higher than those obtained with optimized implementations for general-purpose CPU cores. As a result, during the past few years, these architectures have become powerful, capable, and inexpensive mainstream coprocessors useful for a wide variety of applications.

The growing relevance of these devices has given rise to a very rich programming environment, particularly in comparison with the landscape of only a few years ago. On top of the two major programming frameworks, the compute unified device architecture (CUDA) and OpenCL, libraries (e.g., cuFFT) and high-level interfaces (e.g., Thrust) have been developed that allow fast access to the computing power of GPUs and other accelerators without detailed knowledge of the underlying hardware. Annotation-based programming models (e.g., PGI Accelerator), GPU plug-ins for existing mathematical software (e.g., Jacket for MATLAB), GPU scripting languages (e.g., PyOpenCL), and new data-parallel languages (e.g., Copperhead) are also helping to bring the programming of hardware accelerators to a new level; a brief sketch of this style of programming is given below.

Altogether, the advances in both the hardware and the programmability of accelerators, coupled with their potentially appealing performance/power ratio for a wide range of applications, have pushed organizations to invest in heterogeneous systems that include accelerators, and have motivated researchers to port their algorithms to such systems and to develop novel tools that facilitate their usage.
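To make the level of abstraction offered by such high-level interfaces concrete, the following minimal sketch uses NVIDIA's Thrust library to square and sum a vector entirely on the GPU without writing an explicit kernel. It is purely illustrative and is not taken from any paper in this issue; the problem size and the operations performed are assumptions chosen for brevity.

```cpp
// Minimal Thrust sketch (compile with nvcc): square a vector and sum it on the GPU.
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/transform.h>
#include <thrust/functional.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    const int n = 1 << 20;  // illustrative problem size

    // Allocate and fill a vector directly in GPU memory with 0, 1, ..., n-1.
    thrust::device_vector<float> x(n);
    thrust::sequence(x.begin(), x.end());

    // Square every element on the device; no kernel code or thread
    // configuration is written by the programmer.
    thrust::device_vector<float> y(n);
    thrust::transform(x.begin(), x.end(), y.begin(), thrust::square<float>());

    // Parallel reduction on the device; the scalar result is copied back to the host.
    float sum = thrust::reduce(y.begin(), y.end(), 0.0f, thrust::plus<float>());

    std::printf("sum of squares = %f\n", sum);
    return 0;
}
```

The same computation written directly in CUDA or OpenCL would require explicit memory transfers, kernel launches, and thread-block configuration, which is precisely the detail that libraries and high-level interfaces of this kind hide from the programmer.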