We consider our paper's artifact to be the benchmarks used in the paper, together with the results obtained by running BoostFSM to enable scalable FSM parallelization. We have provided a zip file containing a simplified version of our implementation for download and evaluation. However, our performance measurements require a KNL architecture with 64 cores, so reviewers are also encouraged to contact us for remote access. In this artifact, we reproduce only a subset of the results shown in the paper, both to keep the total evaluation time under 4 hours and because our framework continues to evolve. This subset should nevertheless suffice to validate the claims made in the paper. For any bugs, comments, or feedback, please do not hesitate to contact us.