Summary Current parallel programming frameworks aid developers to a great extent in implementing applications that exploit parallel hardware resources. Nevertheless, developers require additional expertise to properly use and tune them to operate efficiently on specific parallel platforms. On the other hand, porting applications between different parallel programming models and platforms is not straightforward and demands considerable effort and specific knowledge. Apart from that, the lack of high-level parallel pattern abstractions in those frameworks further increases the complexity of developing parallel applications. To pave the way in this direction, this paper proposes GRPPI, a generic and reusable parallel pattern interface for both stream-processing and data-intensive C++ applications. GRPPI accommodates a layer between developers and existing parallel programming frameworks targeting multi-core processors, such as C++ threads, OpenMP, and Intel TBB, and accelerators, such as CUDA Thrust. Furthermore, thanks to its high-level C++ application programming interface and pattern-composability features, GRPPI allows users to easily expose parallelism via standalone patterns or pattern compositions in sequential applications. We evaluate this interface using an image processing use case and demonstrate its benefits from the usability, flexibility, and performance points of view. Furthermore, we analyze the impact of using stream and data pattern compositions on CPUs, GPUs, and heterogeneous configurations.

An approach to relieve developers from this burden is the use of pattern-based parallel programming frameworks, such as SkePU [2], FastFlow [3], or Intel TBB [4]. In this sense, design patterns provide a way to encapsulate (using a building-blocks approach) algorithmic aspects, allowing users to implement more robust, readable, and portable solutions at a high level of abstraction. Basically, these patterns instantiate parallelism while hiding away the complexity of the concurrency mechanisms, e.g., thread management, synchronization, or data sharing. Examples of applications from multiple domains (e.g., financial, medical, and mathematical) that improve their performance through parallel design patterns can be widely found in the literature [5][6][7]. Nevertheless, although all these skeletons aim to simplify the development of parallel applications, there is no unified standard [8]. Therefore, users need to understand the different frameworks, not only to decide which fits their purposes best, but also to use them properly; the effort of migrating applications among frameworks likewise becomes an arduous task. To mitigate this situation, this paper presents GRPPI, a generic and reusable high-level C++ parallel pattern interface that comprises both stream and data-parallel patterns. In general, the goal of GRPPI is to hide the complexity of the underlying parallel programming frameworks behind a single, unified pattern interface, so that applications can be retargeted among them with minimal changes.
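To illustrate the kind of code this interface targets, the sketch below composes a stream pipeline whose middle stage is a farm, in the style of GRPPI's C++ API. The names grppi::parallel_execution_native, grppi::pipeline, and grppi::farm follow the published examples, but the header layout and exact signatures may differ across library versions, so this is an illustrative sketch under those assumptions rather than the definitive API.

```cpp
#include <optional>
#include <vector>
// Assumed umbrella header; the actual GRPPI header names may vary by version.
#include "grppi/grppi.h"

// Minimal sketch, assuming a GRPPI-like interface: each item flows
// generator -> farm(4 workers) -> sink over a multi-core back end.
void process_stream(std::vector<int>& input) {
  grppi::parallel_execution_native ex;   // C++ threads execution policy
  std::size_t i = 0;

  grppi::pipeline(ex,
    // Stage 1: generator; an empty optional signals the end of the stream.
    [&]() -> std::optional<int> {
      if (i < input.size()) return input[i++];
      return {};
    },
    // Stage 2: farm of 4 workers applying a (hypothetical) per-item kernel.
    grppi::farm(4, [](int x) { return x * x; }),
    // Stage 3: sink consuming the processed items.
    [](int y) { /* e.g., accumulate or write y */ }
  );
}
```

In this design, retargeting the same composition to another back end (e.g., OpenMP or TBB) would only require swapping the execution policy object, which is the portability claim the abstract makes.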
Summary The rapid progress of multi-/many-core architectures has left many data-intensive parallel applications not yet fully optimized to deliver the best performance. With the advent of concurrent programming, frameworks offering structured patterns have alleviated the burden developers face when adapting such applications to multithreaded architectures. While some of these patterns are implemented using synchronization primitives, others avoid them by means of lock-free data mechanisms. However, lock-free programming is not straightforward, and ensuring an appropriate use of these interfaces can be challenging, since different memory models, together with instruction reordering at the compiler and processor levels, can interfere in the occurrence of data races. Race detectors are of great benefit in this sense; however, they may emit false positives if they are unaware of the semantics of the underlying lock-free structures. To mitigate this issue, this paper extends ThreadSanitizer, a race detection tool, with the semantics of two lock-free data structures: the single-producer/single-consumer and the multiple-producer/multiple-consumer queues. With it, we are able to drop false positives and detect potential semantic violations. The experimental evaluation, using different queue implementations on a set of micro-benchmarks and real applications, demonstrates that it is possible to reduce the number of data race warnings by 60% on average and to detect wrong uses of these structures.
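For context, the following is a minimal sketch of the kind of single-producer/single-consumer lock-free queue whose acquire/release semantics a race detector must understand in order to avoid false positives. It is a generic illustration in standard C++ atomics, not one of the queue implementations evaluated in the paper.

```cpp
#include <atomic>
#include <cstddef>
#include <optional>

// Generic bounded SPSC ring buffer (illustrative only). The producer and the
// consumer never access the same slot concurrently: the release store on one
// index and the acquire load on the other establish the happens-before edge
// that a semantics-aware race detector must recognize.
template <typename T, std::size_t N>
class spsc_queue {
  T buffer_[N];
  std::atomic<std::size_t> head_{0};  // next slot to read (advanced by consumer)
  std::atomic<std::size_t> tail_{0};  // next slot to write (advanced by producer)

 public:
  bool push(const T& value) {               // called only by the single producer
    std::size_t tail = tail_.load(std::memory_order_relaxed);
    std::size_t next = (tail + 1) % N;
    if (next == head_.load(std::memory_order_acquire)) return false;  // full
    buffer_[tail] = value;                            // plain write to the slot
    tail_.store(next, std::memory_order_release);     // publish the slot
    return true;
  }

  std::optional<T> pop() {                  // called only by the single consumer
    std::size_t head = head_.load(std::memory_order_relaxed);
    if (head == tail_.load(std::memory_order_acquire)) return {};     // empty
    T value = buffer_[head];                          // plain read of the slot
    head_.store((head + 1) % N, std::memory_order_release);  // free the slot
    return value;
  }
};
```

The plain, non-atomic accesses to the buffer are precisely the accesses whose safety depends on the queue's protocol; a detector that does not model this protocol may report them as races.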
Since the 'free lunch' of processor performance is over, parallelism has become the new trend in hardware and architecture design. However, parallel resources deployed in data centers are underused in many cases, given that sequential programming is still deeply rooted in current software development. To address this problem, new methodologies and techniques for parallel programming have been progressively developed. For instance, parallel frameworks offering programming patterns allow expressing concurrency in applications to better exploit parallel hardware. Nevertheless, a large portion of production software, from a broad range of scientific and industrial areas, is still developed sequentially. Considering that these software modules contain thousands, or even millions, of lines of code, an extremely large amount of effort is needed to identify parallel regions. To pave the way in this area, this paper presents the Parallel Pattern Analyzer Tool, a software component that aids the discovery and annotation of parallel patterns in source code. This tool simplifies the transformation of sequential source code into parallel code. Specifically, we provide support for identifying the Map, Farm, and Pipeline parallel patterns and evaluate the quality of the detection on a set of different C++ applications.
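As an illustration of the kind of structure such a tool looks for, the loop below exhibits a Map pattern: every iteration reads and writes only its own elements, with no loop-carried dependencies, so it can be annotated and later rewritten with a parallel map. The annotation comment is hypothetical and only marks where a detected pattern would be reported; it is not the tool's actual output format.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sequential loop with a Map structure: iteration i touches only in[i] and
// out[i], so all iterations are independent and can run in parallel.
void scale(const std::vector<double>& in, std::vector<double>& out, double k) {
  // [hypothetical annotation] parallel pattern detected: Map
  for (std::size_t i = 0; i < in.size(); ++i) {
    out[i] = k * std::sqrt(in[i]);
  }
}
```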
First of all, to my parents. I owe you everything I am and will be. Thank you for the care and upbringing I received, for the good (and bad) moments, for those pool days at Playa de Madrid, for the barbecues at the Fundación del Lesionado Medular/ASPAYM Madrid, for the meals at "La Bodeguita" and at the Centro Municipal de Mayores Ascao. Thanks to you I learned to be strong. To Mom, because you taught me the most important thing, to smile in spite of adversity, along with other, less important things, like keeping a home. To Dad, for teaching me much of what I know, for balance, and for lighting my way; even if it was by accident, without you my vocation would be a different one. I will never be able to thank you as much as you deserved. Rest in peace.

Secondly, to my workmates: Manu, Javi Doc, and David. For all the moments shared in the lab and outside it; for helping me and pushing me to get the work done. Without you, I doubt I would be writing these lines now. To my labmates: Guille, Mario, Javi Prieto, and José Cabrero, for the chats and the moments of laughter. To Alejandro Calderón, who has always been there to listen and lend a hand. To José Daniel, for giving me the opportunity to work in the group and for being there during the difficult moments of late. Also to my colleagues at CERN: Enric, Stephan, Guilherme, Bertrand, Javi, Emmanouil "Manos", and especially Axel, Jakob, and Pere, for welcoming me into ROOT.

I will not finish without mentioning the people who were always there (no, I have not forgotten you): my brother, Úrsula, my aunt Paloma and my uncle Matías, and my cousin "el niño".