Proceedings of the 2017 ACM International Conference on Management of Data
DOI: 10.1145/3035918.3035954
Accelerating Pattern Matching Queries in Hybrid CPU-FPGA Architectures

Cited by 76 publications (36 citation statements)
References 27 publications
“…Sidler et al. [53] have proposed an FPGA solution for accelerating database pattern matching queries; the proposed solution reduces query response time by 70%. Similarly, Kara et al. [27] demonstrated how offloading the partitioning operation of the SQL join operator to the FPGA can significantly improve performance and offer a robust solution.…”
Section: Low-latency Data Processing Pipelines
confidence: 99%
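
For context on the technique cited above, here is a minimal software sketch of the hash-partitioning step of a join, the kind of operation Kara et al. [27] offload to the FPGA. The tuple layout, hash function, and partition count are illustrative assumptions, not the paper's actual design.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Tuple {
    uint64_t key;      // join key
    uint64_t payload;  // carried row identifier / value
};

// Scatter input tuples into 2^radix_bits partitions by hashing the join key.
// In the cited work this scatter is the step moved onto the FPGA, where it can
// run as a streaming pipeline instead of a cache-unfriendly CPU loop.
std::vector<std::vector<Tuple>> partition(const std::vector<Tuple>& input,
                                          unsigned radix_bits) {
    const std::size_t num_partitions = std::size_t{1} << radix_bits;
    std::vector<std::vector<Tuple>> parts(num_partitions);
    for (const Tuple& t : input) {
        // Simple multiplicative hash; the actual hardware hash may differ.
        const uint64_t h = t.key * 0x9E3779B97F4A7C15ULL;
        parts[h >> (64 - radix_bits)].push_back(t);
    }
    return parts;
}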
“…These operators might belong to different categories of database operators such as selections [85,107,124,125,139], projections [85,107,124,126], aggregations (sum, max, min, etc.) [31,84,86,97], and regular expression matching [113,131]. We place them together because they typically act as pre-processing or post-processing steps in most queries and have similar memory access patterns.…”
Section: Streaming Operators
confidence: 99%
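
A minimal sketch of the streaming-operator pattern grouped together above: a single sequential pass that applies a selection, a projection, and an aggregation to each incoming row. The row layout, predicate, and query are illustrative assumptions; the point is the one-pass, sequential memory access shared by these operators.

#include <cstdint>
#include <vector>

struct Row {
    int32_t a;
    int32_t b;
    int32_t c;
};

// Roughly: SELECT SUM(b + c) FROM rows WHERE a > threshold
// Each row is read exactly once, in order -- the sequential access pattern
// shared by selections, projections, and aggregations.
int64_t select_project_aggregate(const std::vector<Row>& rows, int32_t threshold) {
    int64_t sum = 0;
    for (const Row& r : rows) {
        if (r.a > threshold) {                       // selection
            sum += static_cast<int64_t>(r.b) + r.c;  // projection + aggregation
        }
    }
    return sum;
}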
“…This is possible because the ACCORDA accelerator is fast, small, and low-power, so a single accelerator is sufficient to serve many CPU cores (see Section 5) while still delivering high speedups (evaluated in Section 7.3). Most other hardware acceleration approaches are forced by power constraints into looser integration [18,35,45,50] and wind up with two worker types: accelerated and normal. Such an approach complicates scheduling, forcing query execution to switch between workers to exploit acceleration.…”
Section: Uniform Runtime Worker Model
confidence: 99%
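
A minimal sketch, under assumed interfaces, of the uniform-worker idea in that excerpt: every worker thread is identical and submits pattern-matching work to one shared accelerator queue, rather than splitting the pool into accelerated and normal workers. The task type and queue interface are hypothetical, not ACCORDA's actual API.

#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// One pattern-matching request handed to the single shared accelerator.
struct MatchTask {
    std::string pattern;
    std::string data;
    std::promise<bool> result;
};

// Any worker may call submit(); a single device-facing thread drains the
// queue, so the scheduler never has to distinguish between worker types.
class SharedAccelerator {
public:
    std::future<bool> submit(std::string pattern, std::string data) {
        MatchTask task{std::move(pattern), std::move(data), std::promise<bool>{}};
        std::future<bool> fut = task.result.get_future();
        std::lock_guard<std::mutex> lock(mu_);
        queue_.push(std::move(task));
        return fut;
    }

private:
    std::mutex mu_;
    std::queue<MatchTask> queue_;
};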