2021
DOI: 10.1109/tc.2021.3086106
OmpSs@FPGA framework for high performance FPGA computing

Abstract: This paper presents the new features of the OmpSs@FPGA framework. OmpSs is a data-flow programming model that supports task nesting and dependencies to target asynchronous parallelism and heterogeneity. OmpSs@FPGA is the extension of the programming model addressed specifically to FPGAs. The OmpSs environment is built on top of the Mercurium source-to-source compiler and the Nanos++ runtime system. To address FPGA specifics, the Mercurium compiler implements several FPGA-related features such as local variable caching, wide memory …
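The abstract describes tasks with data dependencies that OmpSs@FPGA offloads to the FPGA. A minimal sketch of what such an annotated task can look like, assuming the `target device(fpga)` / `task in()/out()` directive style used by OmpSs@FPGA (the exact clauses here are illustrative, not taken from the paper):

```c
/* Illustrative OmpSs@FPGA-style task annotation (a sketch, not the
 * paper's code). The target directive requests an FPGA implementation
 * of the task; the in()/out() clauses declare the data dependences the
 * Nanos++ runtime tracks. A plain C compiler ignores the unknown
 * pragmas, so the code also runs serially on the host. */
#pragma omp target device(fpga) copy_deps
#pragma omp task in([n]a, [n]b) out([n]c)
void vec_add(const int *a, const int *b, int *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* loop body the HLS tool would synthesize */
}
```

In an OmpSs build, a `#pragma omp taskwait` after invoking the task would wait for its `out()` dependence; compiled with a standard C compiler, the pragmas are ignored and the call behaves as an ordinary function call.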

Cited by 15 publications (17 citation statements). References 16 publications.
“…In addition, these programming models allow the use of parallelism based on tasking (instead of kernel invocations) and can even provide support for data-dependent tasks and manage the execution based on such data dependencies [66]. We plan to move part of the runtime work to a fast hardware task scheduler [66], [40] to further enhance the performance of these programming models. It has already been shown [40] that these improvements can lead to obtaining top performance for some applications on FPGAs directly from high-level language programs.…”
Section: Programming Models and Toolchains
confidence: 99%
“…We plan to move part of the runtime work to a fast hardware task scheduler [66], [40] to further enhance the performance of these programming models. It has already been shown [40] that these improvements can lead to obtaining top performance for some applications on FPGAs directly from high-level language programs. In addition, we plan to extend the range of programs that demonstrate these optimal results with the tools used, by further adapting the environment to the TEXTAROSSA platforms.…”
Section: Programming Models and Toolchains
confidence: 99%
“…The OmpSs@FPGA framework [6] is the extension of OmpSs that enables executing FPGA tasks on heterogeneous CPU+FPGA-based systems. It uses FPGA-specific vendor tools to automate the generation of the FPGA bitstream from the original user source code written in C/C++.…”
Section: OmpSs@FPGA
confidence: 99%
“…In [10], Sano et al. implement a fully custom FPGA design that solves the N-body problem on a single Intel Arria 10 FPGA, achieving 10.944 Gpairs/s. In [6], de Haro et al. use OmpSs@FPGA to execute the N-body simulation on a Xilinx Alveo U200 board, reaching 37.62 Gpairs/s with a performance per watt of 0.58. Del Sozzo et al. [11] also presented a custom N-body implementation on a Xilinx Virtex UltraScale+ board (VU9P).…”
Section: Related Work
confidence: 99%
“…In this paper, we present a new approach to accelerate the computation of the SpMV operation on FPGAs, especially, but not exclusively, using HBM [5], [6]. We define a new, FPGA-friendly sparse matrix encoding format (b8c: block-8-compress) and its corresponding SpMV implementation using OmpSs@FPGA [7], a directive-based programming model that resembles OpenMP tasking [8], [9], originally based on OmpSs [10]. Our implementation targets the general case of SpMV (y = α · A × x + β · y), does not restrict the size of the source or destination vectors, and makes no assumption about the sparsity pattern of the matrix.…”
Section: Introduction
confidence: 99%
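The quoted passage targets the general SpMV form y = α · A × x + β · y. As a plain-C reference for that computation, here is a sketch over a CSR matrix; CSR is a familiar stand-in, since the layout of the b8c (block-8-compress) format itself is not described in this excerpt:

```c
/* Generic SpMV y = alpha*A*x + beta*y over a CSR matrix (sketch).
 * CSR (row_ptr/col_idx/val) stands in for the paper's b8c format.
 * No assumption is made about the sparsity pattern of the matrix,
 * matching the quoted claim about the general case. */
void spmv(int n_rows, const int *row_ptr, const int *col_idx,
          const double *val, double alpha, const double *x,
          double beta, double *y) {
    for (int i = 0; i < n_rows; i++) {
        double acc = 0.0;
        /* accumulate the nonzeros of row i */
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            acc += val[k] * x[col_idx[k]];
        y[i] = alpha * acc + beta * y[i];   /* full y = aAx + by update */
    }
}
```

For example, for the 2×2 matrix [[2, 0], [1, 3]] with x = (1, 1), y = (1, 1), and α = β = 1, this yields y = (3, 5).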