In the formal verification of stochastic systems, statistical model checking uses simulation to overcome the state space explosion problem of probabilistic model checking. Yet its runtime explodes when faced with rare events, unless a rare event simulation method like importance splitting is used. The effectiveness of importance splitting hinges on nontrivial model-specific inputs: an importance function with matching splitting thresholds. This prevents its use by nonexperts for general classes of models. In this paper, we present an automated method to derive the importance function. It considers both the structure of the model and that of the formula characterising the rare event. It is memory-efficient by exploiting the compositional nature of formal models. We experimentally evaluate it in various combinations with two approaches to threshold selection as well as different splitting techniques for steady-state and transient properties. We find that Restart splitting combined with thresholds determined via a new expected success method most reliably succeeds and performs very well for transient properties. It remains competitive in the steady-state case, which is, however, challenging for all the combinations we consider. All methods are implemented in the modes tool of the Modest Toolset and in the Fig rare event simulator.

The state space explosion problem limits probabilistic model checking to small models. For other models, in particular those involving events governed by general continuous probability distributions, model checking techniques exist only for specific subclasses with limited scalability [55], or they merely compute probability bounds [31]. Statistical model checking (SMC [38,72]), i.e. using Monte Carlo simulation with formal models, has become a popular alternative for large models, and for formalisms not amenable to (traditional) probabilistic model checking, such as stochastic (timed) automata [9,18].
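To make the idea concrete, the following is a minimal sketch of what an SMC run amounts to: estimating a bounded reachability probability on a toy discrete-time Markov chain by sampling runs and reporting a normal-approximation confidence interval. The chain, its parameters, and the horizon are invented for illustration; this is not the model or tooling of the paper.

```python
# Illustrative sketch only: crude Monte Carlo SMC on a toy DTMC.
# All names and numbers here are hypothetical, not taken from the paper.
import math
import random

# Toy DTMC: from state i, move to i+1 with probability 0.3, otherwise
# reset to 0. The transient property is "reach state 4 within HORIZON steps".
HORIZON = 20

def run_once(rng):
    state = 0
    for _ in range(HORIZON):
        if state == 4:
            return True
        state = state + 1 if rng.random() < 0.3 else 0
    return state == 4

def smc_estimate(n_runs, seed=0):
    rng = random.Random(seed)
    hits = sum(run_once(rng) for _ in range(n_runs))
    p_hat = hits / n_runs
    # ~95% normal-approximation confidence half-width
    half = 1.96 * math.sqrt(max(p_hat * (1 - p_hat), 1e-12) / n_runs)
    return p_hat, half

p_hat, half = smc_estimate(100_000)
print(f"p ≈ {p_hat:.4f} ± {half:.4f}")
```

Each run only needs the current state, which is why memory usage stays constant regardless of the size of the state space; the cost is shifted entirely into the number of runs.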
SMC trades memory for runtime: memory usage is constant, but the number of simulation runs needed to converge to a result can easily explode with the desired precision. This is exacerbated in the presence of rare events. For instance, when the true probability of an event is 10^−15, one may require the estimation error to be no larger than 10^−16. Such tight precision requirements may render traditional Monte Carlo simulation approaches infeasible [24,65].

Rare event simulation methods (RES [58]) have been developed to attack this problem. They increase the number of simulation runs that reach the rare event and adjust the statistical evaluation accordingly. Broadly speaking, the main RES methods are importance sampling and importance splitting; the two complement each other in several application domains [57]. The former modifies the probability distributions that dictate the stochastic behaviour of the model, with the aim of making the event more likely to occur. The challenge lies in finding a "good" change of measure that modifies the probabilities in an effective way. Importance splitting instead does n...
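The splitting idea can be illustrated on a toy example: a biased random walk that moves up with probability 0.3 and down otherwise, started at 1, where the rare event is reaching state 10 before falling back to 0 (true probability ≈ 2.8·10^−4 by the gambler's-ruin formula). Below is a sketch of fixed-effort splitting with the current state as importance function and a threshold at every level; it is illustrative only, under assumptions of ours, and not the automated method or the tools of this paper.

```python
# Illustrative sketch: fixed-effort importance splitting on a toy biased
# random walk (up with prob 0.3, down with prob 0.7, start at 1).
# Rare event: reach TOP before 0. All names here are hypothetical.
import random

TOP = 10
UP_P = 0.3

def reach_next(rng, level):
    """From state `level`, walk until hitting level+1 (success) or 0 (failure)."""
    s = level
    while 0 < s <= level:
        s = s + 1 if rng.random() < UP_P else s - 1
    return s == level + 1

def fixed_effort(n_per_level, seed=1):
    """Estimate P(reach TOP before 0 | start at 1) as a product of
    per-threshold conditional probabilities, each estimated with
    n_per_level pilot runs. For this particular walk the entry state at
    threshold k is always exactly k, so no entry states need be stored;
    in general, splitting implementations must save them."""
    rng = random.Random(seed)
    estimate = 1.0
    for level in range(1, TOP):
        hits = sum(reach_next(rng, level) for _ in range(n_per_level))
        if hits == 0:
            return 0.0  # every pilot run failed at this threshold
        estimate *= hits / n_per_level
    return estimate

est = fixed_effort(10_000)
print(f"splitting estimate: {est:.2e}")  # true value is about 2.8e-4
```

A crude Monte Carlo estimator would need on the order of 10^5–10^6 runs merely to observe a handful of occurrences of this event; splitting instead concentrates the simulation budget on runs that have already made progress towards it, as measured by the importance function.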