Proceedings of the International Conference on Supercomputing 2017
DOI: 10.1145/3079079.3079082

Enabling scalability-sensitive speculative parallelization for FSM computations

Abstract: Finite state machines (FSMs) are the backbone of many applications, but are difficult to parallelize due to their inherent dependencies. Speculative FSM parallelization has shown promise on multicore machines with up to eight cores. However, as hardware parallelism grows (e.g., Xeon Phi has up to 288 logical cores), a fundamental question arises: How does speculative FSM parallelization scale as the number of cores increases? Without answering this question, existing methods for speculative FSM paralleliza…

Cited by 17 publications (7 citation statements)
References 38 publications
“…If the verification fails, the corresponding input partition has to be reprocessed with the correct starting state. Despite some optimization [53] for stopping the reprocessing earlier, the reprocessing cost, in general, can significantly compromise the speculation benefits [37].…”
Section: Fast Recovery From Misspeculation
confidence: 99%
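The speculate–verify–reprocess cycle described above can be sketched as follows. The two-state parity DFA, the even chunking, and the `predict` callback are illustrative assumptions, not the actual design of the cited papers; in a real system the per-chunk runs execute in parallel.

```python
# Minimal sketch of speculative FSM parallelization with verification.
# The toy DFA (parity of '1' symbols) and the chunking scheme are
# illustrative assumptions, not the cited papers' implementation.

def step(state, sym):
    # Toy DFA: track the parity of '1' symbols (states 0 and 1).
    return state ^ 1 if sym == '1' else state

def run(state, chunk):
    for sym in chunk:
        state = step(state, sym)
    return state

def speculative_run(inp, n_chunks, predict):
    size = -(-len(inp) // n_chunks)  # ceiling division
    chunks = [inp[i:i + size] for i in range(0, len(inp), size)]
    # Speculation: every chunk but the first starts from a predicted state.
    starts = [0] + [predict(i) for i in range(1, len(chunks))]
    # In practice these runs happen concurrently, one per core.
    ends = [run(s, c) for s, c in zip(starts, chunks)]
    # Verification: chunk i's predicted start must match chunk i-1's
    # true ending state; otherwise reprocess from the correct state.
    state = ends[0]
    for i in range(1, len(chunks)):
        if starts[i] != state:
            ends[i] = run(state, chunks[i])  # misspeculation penalty
        state = ends[i]
    return state

inp = '1101001110101'
# A deliberately bad predictor still yields the correct final state,
# at the cost of reprocessing every misspeculated chunk.
assert speculative_run(inp, 4, lambda i: 0) == run(0, inp)
```

The reprocessing loop is exactly the cost the statement refers to: each failed verification serializes one chunk, so frequent misspeculation erodes the speedup.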
“…An implementation of this method [50] has been optimized for SIMD instructions. Speculative parallelization of FSMs has also been exploited [56,57,64,65] by breaking the transition dependences with state predictions. Though providing useful insights, the above work cannot be directly applied to the parallelization of semi-structured data processing, which essentially requires the use of stack-based automata.…”
Section: Related Work
confidence: 99%
“…Many previous research works [11,16,17,25,26] have demonstrated the effectiveness of state-convergence for finite state machine (FSM) algorithms. According to their studies, most FSMs, even those with many states, often converge to 16 or fewer active states for any input.…”
Section: Reducing Number Of Sample Points
confidence: 99%
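The state-convergence effect this statement reports can be illustrated by enumerative execution: running a machine from every possible starting state on the same input and counting how many distinct states survive. The 8-state toy transition function below is an illustrative assumption, not taken from the cited studies.

```python
# Sketch of the state-convergence observation: run a toy FSM from
# every possible starting state and count the surviving states.
# The 8-state transition function is an illustrative assumption.

N_STATES = 8

def step(state, sym):
    # Toy transition function over states 0..7.
    return (2 * state + 1) % N_STATES if sym else state // 2

def active_state_count(inp):
    # Enumerative execution: track the image of ALL starting states.
    states = list(range(N_STATES))
    counts = []
    for sym in inp:
        states = [step(s, sym) for s in states]
        counts.append(len(set(states)))
    return counts

counts = active_state_count([1, 0, 0, 1, 0, 0, 1])
# counts == [4, 4, 2, 2, 2, 1, 1]: the number of distinct states is
# non-increasing, collapsing here to a single active state.
```

Because the active-state count can only shrink (the transition function maps equal states to equal states), a short input prefix is often enough to narrow a large machine down to the handful of states the statement mentions, which is what makes state prediction feasible.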
“…Zhao et al [25,26] parallelized a set of finite state machines (FSMs) with principled speculation. Qiu et al [17] observed that speculative FSM parallelization is not scalable with an increasing number of cores. To address the limitation, they presented a series of scalability analysis models and developed an automatic speculative FSM parallelization framework named S3.…”
Section: Related Work
confidence: 99%