2019 IEEE 5th International Forum on Research and Technology for Society and Industry (RTSI) 2019
DOI: 10.1109/rtsi.2019.8895567
Fog Acceleration through Reconfigurable Devices

Cited by 3 publications (3 citation statements) · References 9 publications
“…Nonetheless, a further increment in the number of boards of the distributed system would lead to the saturation of the fully connected block, that cannot support more than 720 chunks/s. With three replicas of the convolutional block, the throughput gain goes down to 10% with respect to three separate PYNQs in parallel performing the computations of the full BNN (the respective [13] framework, an event-based system that we modified to support jobs and pipelines. In particular, a job has been implemented as a task accepting data from an input queue that is killed after all the data has been processed.…”
Section: Resource Utilization
confidence: 99%
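The citation above describes FARD's job model: a job is a task that consumes data from an input queue and is terminated once all the data has been processed. A minimal sketch of that pattern follows; the names (`job`, `run_job`, the sentinel) are hypothetical illustrations, not FARD's actual API, and the per-chunk computation is a placeholder.

```python
import queue
import threading

_SENTINEL = object()  # marks the end of the input stream

def job(input_queue, results):
    """Consume chunks from input_queue until the sentinel arrives.

    Mirrors the event-based job model quoted above: the task runs
    only as long as there is data, then terminates itself.
    """
    while True:
        chunk = input_queue.get()
        if chunk is _SENTINEL:
            break  # all data processed: the job ends here
        results.append(chunk * 2)  # placeholder for the real per-chunk work

def run_job(data):
    """Feed data to a job running in its own thread and collect results."""
    q = queue.Queue()
    results = []
    worker = threading.Thread(target=job, args=(q, results))
    worker.start()
    for chunk in data:
        q.put(chunk)
    q.put(_SENTINEL)   # signal that all data has been enqueued
    worker.join()      # the job is gone once processing finishes
    return results
```

Killing the task only after the queue drains (via a sentinel) avoids dropping in-flight chunks, which matters when jobs are chained into pipelines as the citation describes.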
“…Fog Acceleration through Reconfigurable Devices (FARD) [4] is a fog computing distributed system designed to allow seamless cooperation across heterogeneous fog computing nodes. Inside FARD, two different aspects are coexisting: hardware acceleration and distributed run-time management.…”
Section: B. FARD
confidence: 99%
“…In this work, we propose an analysis of the hardware resource usage while modifying the CNN splitting point and we describe how FARD [4], a framework to implement fog computing distributed system accelerators, is modified to deal with this kind of applications. This is a continuation of our previous work BNNsplit [5], where we explored splitting strategies for Xilinx FINN BNNs [6].…”
Section: Introduction
confidence: 99%