Finite state machines (FSMs) are the backbone of many applications, but they are difficult to parallelize due to their inherent dependencies. Speculative FSM parallelization has shown promise on multicore machines with up to eight cores. However, as hardware parallelism grows (e.g., Xeon Phi offers up to 288 logical cores), a fundamental question arises: how does speculative FSM parallelization scale as the number of cores increases? Without answering this question, existing methods for speculative FSM parallelization simply use all available cores, which may not only waste computing resources but also yield suboptimal performance. In this work, we conduct a systematic scalability analysis of speculative FSM parallelization. Unlike many other parallelizations, which can be modeled by the classic Amdahl's law or its simple extensions, speculative FSM parallelization scales unconventionally due to the non-deterministic nature of speculation and the cost variations of misspeculation. To address these challenges, this work introduces a spectrum of scalability models customized to the properties of specific FSMs and the underlying architecture. The models, for the first time, precisely capture the scalability of speculative parallelization for different FSM computations, and clearly show the existence of a "sweet spot" in the number of cores employed by speculative FSM parallelization to achieve the optimal performance. To make the scalability models practical, we develop S3, a scalability-sensitive speculation framework for FSM parallelization. For any given FSM, S3 can automatically characterize its properties and analyze its scalability, hence guiding speculative parallelization towards optimal performance and more efficient use of computing resources. Evaluations on different FSMs and architectures confirm the accuracy of the proposed models and show that S3 achieves significant speedup (up to 5X) and energy savings (up to 77%).
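To make the idea concrete, the following is a minimal illustrative sketch (not the paper's S3 implementation) of speculative FSM parallelization: the input is split into chunks, each chunk after the first is processed from a guessed start state, and a chunk is re-executed when its guess turns out wrong. The example FSM (tracking the parity of '1' characters) and the always-guess-state-0 policy are hypothetical choices for illustration only.

```python
# Hypothetical example FSM: parity of '1' characters seen so far.
def step(state, ch):
    return state ^ 1 if ch == '1' else state

def run(state, chunk):
    """Run the FSM sequentially over a chunk, returning the end state."""
    for ch in chunk:
        state = step(state, ch)
    return state

def speculative_run(inp, num_chunks, guess_state=0):
    """Sketch of speculative parallelization; returns (final_state, re-executions)."""
    n = len(inp)
    bounds = [(i * n // num_chunks, (i + 1) * n // num_chunks)
              for i in range(num_chunks)]
    # Speculation phase: chunk 0 starts from the true initial state; every
    # other chunk starts from a guessed state. In a real parallelization,
    # each chunk would run on its own core.
    spec_end = [run(0 if i == 0 else guess_state, inp[lo:hi])
                for i, (lo, hi) in enumerate(bounds)]
    # Validation phase: if a chunk's true start state differs from the
    # guess, re-run it sequentially -- this misspeculation cost, and its
    # variation across inputs, is what makes the scalability unconventional.
    state = spec_end[0]
    reexecutions = 0
    for i in range(1, num_chunks):
        if state == guess_state:
            state = spec_end[i]            # speculation succeeded
        else:
            lo, hi = bounds[i]
            state = run(state, inp[lo:hi])  # misspeculation: redo chunk
            reexecutions += 1
    return state, reexecutions
```

Adding more chunks (cores) shortens the speculation phase but raises the chance of misspeculated chunks that must be redone, which is the tension behind the "sweet spot" in core count that the paper's models capture.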
CCS CONCEPTS • Computing methodologies → Parallel algorithms; • Computer systems organization → Parallel architectures;