2020
DOI: 10.1016/j.future.2019.11.040

A model for distributed in-network and near-edge computing with heterogeneous hardware

Abstract: Please refer to the published version for the most recent bibliographic citation information. If a published version is known, the repository item page linked above will contain details on accessing it.

Cited by 21 publications (14 citation statements)
References 43 publications
“…Offloading Studies: This category of studies provides resource allocation or task offloading solutions in an edge computing environment that is augmented with in-network computing. While [58], [142], [143] tackle the resource allocation problem with an optimization approach, the study in [144] provides an architecture capable of offloading tasks to an edge processor enhanced with an FPGA accelerator. Ali et al. [58] focus on use cases such as open-air rock concerts and sports events, where constructing a wired network is not beneficial.…”
Section: B Edge Computingmentioning
confidence: 99%
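The offloading decision that the optimization-based studies above formalize can be illustrated with a minimal latency comparison: offload when the time to transmit the task plus compute it at the edge beats local execution. This is a hedged sketch only; the function and parameter names are illustrative and do not come from the cited papers.

```python
# Minimal latency-based offloading decision (illustrative sketch).
# A task is characterized by its input size in bits and its compute
# demand in CPU cycles; names here are assumptions, not from [58].

def offload_latency(data_bits, bandwidth_bps, cycles, edge_hz):
    """Time to ship the task over the link plus compute it at the edge node."""
    return data_bits / bandwidth_bps + cycles / edge_hz

def local_latency(cycles, local_hz):
    """Time to compute the task entirely on the local device."""
    return cycles / local_hz

def should_offload(data_bits, bandwidth_bps, cycles, local_hz, edge_hz):
    """Offload iff the edge path finishes sooner than local execution."""
    return offload_latency(data_bits, bandwidth_bps, cycles, edge_hz) \
        < local_latency(cycles, local_hz)

# Example: 1 MB task, 10 Mbit/s link, 10^9 cycles, 1 GHz device, 10 GHz edge
print(should_offload(8e6, 10e6, 1e9, 1e9, 10e9))  # True: 0.9 s vs 1.0 s
```

Real formulations in the cited work add constraints such as shared edge capacity and energy budgets, turning this pairwise comparison into a joint optimization over many tasks and nodes.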
“…The JIT compiler can then perform dynamic kernel replication to efficiently exploit overlay resources. Finally, supporting runtime compilation on an embedded processor allows a lightweight edge accelerator node to compile unknown kernels without the need for a powerful server, enabling the emerging trend of in-network FPGA acceleration [43], [44].…”
Section: Background and Related Workmentioning
confidence: 99%
“…As these applications grow in scale, this centralized approach leads to growing bandwidth requirements and potentially impractical computational latencies. This has generated interest in computing architectures wherein processing is performed in a distributed manner [8].…”
Section: Distributed Processing Systemsmentioning
confidence: 99%