2020
DOI: 10.1016/j.swevo.2020.100720
A comparative study of high-productivity high-performance programming languages for parallel metaheuristics

Abstract: Parallel metaheuristics require programming languages that provide both high performance and a high level of programmability. This paper aims at providing a useful data point to help practitioners gauge the difficult question of whether to invest time and effort into learning and using a new programming language. To accomplish this objective, three productivity-aware languages (Chapel, Julia, and Python) are compared in terms of performance, scalability, and productivity. To the best of our knowledge, this is …

Cited by 24 publications (10 citation statements)
References 30 publications
“…), which indeed are quite popular in the remote sensing community, are generally not recommended for heavy numerical computation. This is for two main reasons: first, they generally do not allow the programmer to explicitly specify the desired parallelism scheme, owing to a general lack of specific support for efficient parallel computing; second, they are significantly slower in terms of execution time, and thus in some cases suffer severe performance penalties [45].…”
Section: OpenMP-based Parallel Implementation
confidence: 99%
“…This is a very useful property for a parallel implementation, requiring only N − 1 synchronizations to process O(KN²(1 + M)) operations. Hence, a parallel implementation of Algorithm 5 is straightforward in a shared-memory setting, using OpenMP in C/C++ for instance, or higher-level programming languages such as Python, Julia, or Chapel [60]. One may also consider intensive parallelization in a many-core environment, such as general-purpose graphics processing units (GPGPUs).…”
Section: Towards a Parallel Implementation
confidence: 99%
“…For the tests carried out, a base version of the Rust algorithm was first used, and the impact of applying each optimization incrementally was analyzed. Then, the best version of Rust was selected for comparison with its C equivalent.…”
Section: Experimental Design
confidence: 99%
“…This is why, in recent years, many programming languages with a high level of abstraction have tried to add support for concurrency and parallelism, in an attempt to compete with C and Fortran. Among these, we can mention Java [6,7] and Python [8,9]; unfortunately, neither has so far succeeded in becoming an alternative in the HPC community.…”
Section: Introduction
confidence: 99%