2013 International Conference on Reconfigurable Computing and FPGAs (ReConFig)
DOI: 10.1109/reconfig.2013.6732296

FPGA²: An open source framework for FPGA-GPU PCIe communication

Abstract: In recent years, two main platforms have emerged as powerful key players in the domain of parallel computing: GPUs and FPGAs. Much research investigates the interaction and benefits of coupling them with a general purpose processor (CPU), but very little, and only very recently, integrates the two in the same computational system. Even less research focuses on direct interaction between the two platforms [1]. This paper presents an open source framework enabling easy integration of GPU and FPGA resources; our work provides…

Cited by 13 publications (9 citation statements)
References 5 publications
“…The FPGA+GPU architecture [3] is the collaboration of FPGA and GPU. The FPGA chip is suited to pipelined tasks, while the GPU is suited to highly parallel tasks.…”
Section: Survey of the Heterogeneous Computing Platform from Hardware
Mentioning confidence: 99%
“…Whilst GPUs and FPGAs are often compared against each other as hardware accelerators and are rarely utilised in the same system, interest in the area of GPU-FPGA heterogeneous computing and its potential applications is a growing field of research [21]. The proposed new hardware architecture is based around a physical backplane, which connects different hardware devices chosen to handle specific stages in the onboard data processing chain.…”
Section: B. Proposed New Onboard Data Processing System
Mentioning confidence: 99%
“…Design 1, as presented in [12], offered an input and an output FIFO to help the designer with the realization of streaming applications. It was conceived with a memory mapping allowing the central CPU to access the FIFOs through a range of addresses.…”
Section: Hardware Design
Mentioning confidence: 99%
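The excerpt above describes the CPU-facing side of Design 1: an input and an output FIFO exposed through a memory-mapped address range over PCIe. The minimal C sketch below illustrates that access pattern from the host; the sysfs resource path, BAR size, and register offsets are illustrative assumptions, not the actual FPGA² register map.

```c
/* Minimal sketch of memory-mapped FIFO access over a PCIe BAR.
 * All names below (device path, BAR size, FIFO offsets) are assumptions
 * made for illustration, not the framework's real register layout. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_SIZE        0x1000  /* assumed size of the mapped PCIe BAR      */
#define IN_FIFO_OFFSET  0x000   /* assumed offset of the input FIFO window  */
#define OUT_FIFO_OFFSET 0x400   /* assumed offset of the output FIFO window */

int main(void)
{
    /* One common way to reach a device BAR from user space is the
     * sysfs resource file exported by the kernel for the PCIe device. */
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *bar = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Push one word into the input FIFO and pop one from the output FIFO:
     * plain stores and loads to the mapped address range, as the excerpt
     * describes for the central CPU. */
    bar[IN_FIFO_OFFSET / 4] = 0xCAFEBABEu;
    uint32_t result = bar[OUT_FIFO_OFFSET / 4];
    printf("read back: 0x%08x\n", (unsigned)result);

    munmap((void *)bar, BAR_SIZE);
    close(fd);
    return 0;
}
```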
“…The most important blocks of design 1 are described in [12]. Here we focus on design 2, showing some differences from its ancestor:…”
Section: Hardware Design
Mentioning confidence: 99%