This report presents the results of a friendly competition for formal verification and policy synthesis of stochastic models. It also introduces new benchmarks and their properties within this category and recommends next steps towards next year's edition of the competition. Compared with tools for non-probabilistic models, tools for stochastic models are at an early stage of development that does not yet allow a full competition on a standard set of benchmarks. We therefore report on an initiative to collect a set of minimal benchmarks that all such tools can run, facilitating a comparison of the efficiency of the implemented techniques. The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in Summer 2022.