Parallel computing techniques have been supported by programming languages in two major ways: through library-based APIs or through extended language constructs. Library-based features are portable and offer fine-grained control over parallelization details. However, they rely on the skills of individual programmers and may therefore lead to inconsistent implementations and considerable code restructuring. In contrast, language constructs promote environments that largely conceal the details of parallel programming techniques. However, they normally reduce programmer control over the granularity of parallelization and impose additional development concepts and compilation requirements that may sacrifice ease of use and portability. Therefore, approaches that balance programmer control over parallelization details, intuitiveness of concepts, and portability can gain priority over other paradigms. In this paper, we discuss @PT (Annotation Parallel Task), a parallel computing framework that uses Java annotations, which are standard Java components, as its language constructs. @PT takes an object-oriented approach to the efficient execution and management of asynchronous tasks, with a special focus on GUI-responsive applications. This paper presents the annotation-based programming interface of the framework and its fundamental parallelization concepts. Furthermore, it studies the usability and performance of @PT through comparisons with other Java parallelization approaches on a set of standard benchmarks. The observations suggest that @PT maintains a simple programming interface while performing efficiently in different parallel computing domains.
INTRODUCTION

The proliferation of multi-core systems has encouraged support for parallel computing in shared-memory systems. This support is often offered in the form of libraries (eg, TPL and PLINQ in C#1) or language constructs (eg, OpenMP2). The majority of library-based approaches provide APIs that cover a wide range of fine-grained functions that can be combined into flexible and efficient implementations for different parallel computing domains. Assembling and combining these functions into a working system remains the responsibility of individual programmers. Consequently, the quality of the software design depends on the programming approaches practiced by individuals. Reliance on the technical abilities and coding styles of individuals leads to inconsistent implementation and performance standards; the resulting implementations can be inefficient and complex to modify.

To overcome the risks mentioned above, approaches such as Microsoft PPL,3 Java Streams,4 SKePU,5 and FastFlow6 offer library APIs that implement frequent parallel processing patterns. Examples of these patterns include map, reduce, farm, pipeline, and asynchronous tasks.5 The pre-built components of these APIs improve consistency and efficiency across different implementations. In addition, these patterns can be composed to form larger parallel applications. However, one needs to know what pattern to choose f...
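As a brief illustration of such pattern-based library APIs, the sketch below uses Java Streams (cited above) to express a map-reduce computation over a parallel stream. The class name and input data are illustrative assumptions and are not taken from the benchmarks discussed in this paper.

import java.util.List;
import java.util.stream.IntStream;

public class StreamPatternSketch {
    public static void main(String[] args) {
        // Map-reduce pattern: map each word to its length in parallel,
        // then reduce the lengths to a single sum.
        List<String> words = List.of("parallel", "patterns", "compose", "well");
        int totalLength = words.parallelStream()
                               .mapToInt(String::length)
                               .sum();

        // The same pattern over a numeric range; the stream runtime
        // decides how the range is split across worker threads.
        long sumOfSquares = IntStream.rangeClosed(1, 1_000)
                                     .parallel()
                                     .mapToLong(i -> (long) i * i)
                                     .sum();

        System.out.println("total length = " + totalLength);
        System.out.println("sum of squares = " + sumOfSquares);
    }
}

The pattern itself, a map followed by a reduce, is fixed by the API; the programmer supplies only the per-element functions, which reflects the consistency benefit attributed to these pattern libraries above.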