2019
DOI: 10.14778/3357377.3357383
Lowering the latency of data processing pipelines through FPGA based hardware acceleration

Abstract: Web search engines often involve a complex pipeline of processing stages, including computing, scoring, and ranking potential answers, and returning the sorted results. The latency of such pipelines can be improved by minimizing data movement, making stages faster, and merging stages. Throughput is determined by the stage with the smallest capacity, and it can be improved by allocating enough parallel resources to each stage. In this paper we explore the possibility of employing hardware acceleration (an FPG…
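The abstract's latency/throughput model can be sketched in a few lines: end-to-end latency is the sum of per-stage latencies, while sustained throughput is bounded by the stage with the smallest capacity. The stage names and numbers below are illustrative assumptions, not figures from the paper.

```python
# Pipeline model from the abstract: latency adds up across stages;
# throughput is limited by the bottleneck stage.

def pipeline_latency(stage_latencies_ms):
    """End-to-end latency of one request traversing every stage."""
    return sum(stage_latencies_ms.values())

def pipeline_throughput(stage_capacities_qps):
    """Sustained throughput is limited by the smallest-capacity stage."""
    return min(stage_capacities_qps.values())

# Hypothetical stage figures for a search pipeline.
stages_latency = {"compute": 5.0, "score": 12.0, "rank": 3.0, "sort": 1.0}
stages_capacity = {"compute": 900, "score": 250, "rank": 1200, "sort": 2000}

print(pipeline_latency(stages_latency))      # 21.0 ms per query
print(pipeline_throughput(stages_capacity))  # 250 qps: "score" is the bottleneck

# Allocating parallel resources to the bottleneck (e.g., offloading
# scoring to an FPGA) raises its capacity and hence overall throughput.
stages_capacity["score"] = 800
print(pipeline_throughput(stages_capacity))  # 800 qps
```

This is why the paper targets the scoring stage: accelerating anything other than the bottleneck leaves throughput unchanged.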

Cited by 42 publications (27 citation statements)
References 48 publications
“…FPGAs offer parallelism, through which multiple processes can run at the same instant. This feature reduces the computation time of a design on an FPGA; in other words, it reduces the design's latency [44].…”
Section: Results
confidence: 99%
“…Results have shown that VIBNN can reduce energy consumption (C3) while attaining high throughput (C9) levels. Lastly, Owaida, M., et al. [58] have explored an FPGA-based accelerator to improve the overall performance of data processing pipelines. The authors focus on decision tree ensemble methods, a common approach to scoring and classification in search systems.…”
Section: E. Hardware Acceleration
confidence: 99%
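The decision-tree-ensemble scoring that the cited work offloads to an FPGA can be sketched as follows. The tree layout, feature names, and leaf weights are illustrative assumptions, not the paper's actual model; the point is that each tree is independent, which is what makes the ensemble parallelizable in hardware.

```python
# Minimal sketch of decision-tree-ensemble scoring (illustrative trees,
# not the cited paper's model).

def eval_tree(tree, features):
    """Walk one binary tree: internal nodes compare a feature against a
    threshold; leaves hold a partial score."""
    node = tree
    while "leaf" not in node:
        if features[node["feature"]] <= node["threshold"]:
            node = node["left"]
        else:
            node = node["right"]
    return node["leaf"]

def ensemble_score(trees, features):
    """Trees are independent, so an FPGA can evaluate all of them in
    parallel; the final score is their sum."""
    return sum(eval_tree(t, features) for t in trees)

# Two hypothetical trees over made-up ranking features.
trees = [
    {"feature": "bm25", "threshold": 2.0,
     "left": {"leaf": 0.1}, "right": {"leaf": 0.7}},
    {"feature": "clicks", "threshold": 10,
     "left": {"leaf": -0.2}, "right": {"leaf": 0.4}},
]
doc = {"bm25": 3.1, "clicks": 4}
print(ensemble_score(trees, doc))  # approximately 0.5 (0.7 + -0.2)
```

In software the trees are evaluated sequentially; a hardware implementation can instantiate one evaluation unit per tree and sum the leaves with an adder tree, which is the source of the latency reduction the citing papers describe.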
“…Scalability (C2) has been tackled in the revised literature, starting from different points of view.…” [The quote continues into a table mapping challenges to citing references: Mobility (C1) — 25 references (20.8%); Scalability (C2) — a further reference list including [58].]
Section: Open Challenges and Future Directions
confidence: 99%
“…NF applications are particularly diverse in nature, with requirements spanning from high throughput to short latency; effectively utilizing heterogeneous computing resources is a key aspect in meeting these diverse NF demands. For instance, Owaida et al. [226] have proposed an FPGA-based web search engine hardware acceleration framework, which implements the scoring function as a decision tree ensemble. A web search engine involves a pipeline of computing, scoring, and ranking potential results.…”
Section: CPU-FPGA
confidence: 99%