Predictions from simulations have entered the mainstream of public policy and decision-making practice. Unfortunately, methods for gaining insight into faulty simulation outputs have not kept pace. Ideally, an insight-gathering method would automatically identify the cause of a faulty output and explain to the simulation developer how to correct it. In the field of software engineering, this challenge has been addressed for general-purpose software through statistical debuggers. We present two research contributions, elastic predicates and many-valued labeling functions, that make debuggers designed for general-purpose software more effective for simulations employing random variates and continuous numbers. Elastic predicates address deficiencies of existing debuggers related to continuous numbers, whereas many-valued labeling functions support the use of random variates. When used in combination, these contributions allow a simulation developer tasked with localizing the program statement causing a faulty simulation output to examine 40% fewer statements than the leading alternatives. Our evaluation shows that elastic predicates and many-valued labeling functions maintain their ability to reduce the number of program statements that need to be examined under the imperfect conditions that developers experience in practice.
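
To make the first contribution concrete, the sketch below contrasts a conventional rigid predicate with an elastic one. It assumes one common formulation of elastic predicates, in which the threshold adapts to the mean and standard deviation of a variable's values observed across prior runs; the function names, data, and use of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rigid_predicate(x: float) -> bool:
    """Conventional statistical-debugging predicate: a fixed
    comparison that treats all continuous values the same way."""
    return x > 0.0

def elastic_predicate(x: float, observed: np.ndarray, k: float = 1.0) -> bool:
    """Illustrative elastic predicate: 'x is unusually large' relative
    to the values of this variable seen in prior runs. The threshold
    stretches with the observed distribution (mean + k standard
    deviations) instead of being fixed in advance."""
    mu, sigma = observed.mean(), observed.std()
    return x > mu + k * sigma

# Values of one variable collected from runs with correct output.
passing_values = np.array([0.9, 1.1, 1.0, 0.95, 1.05])

# A value from a run with faulty output: the rigid predicate is true
# for every run and so carries no signal, while the elastic predicate
# is true only for the outlying value.
faulty_value = 1.8
print(rigid_predicate(faulty_value))                    # True (also true for all passing values)
print(elastic_predicate(faulty_value, passing_values))  # True
print(elastic_predicate(1.05, passing_values))          # False
```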
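The second contribution can be sketched in the same spirit. The snippet below assumes one plausible reading of many-valued labeling functions: rather than forcing a binary pass/fail verdict on a stochastic simulation run, the label grades how atypical the run's output is relative to replications of a trusted configuration, so that deviations attributable to random variates alone are not mislabeled as failures. The four-label scale, thresholds, and names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def many_valued_label(output: float, reference: np.ndarray) -> str:
    """Illustrative many-valued labeling function: grade a stochastic
    run by how many reference standard deviations its output lies
    from the reference mean, instead of a strict pass/fail call."""
    z = abs(output - reference.mean()) / reference.std()
    if z < 1.0:
        return "pass"
    if z < 2.0:
        return "likely pass"
    if z < 3.0:
        return "likely fail"
    return "fail"

# Outputs from replications of a trusted (oracle) configuration.
reference_outputs = np.array([10.2, 9.8, 10.1, 9.9, 10.0])

print(many_valued_label(10.05, reference_outputs))  # pass
print(many_valued_label(10.5, reference_outputs))   # fail (well outside the reference spread)
```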