Optimal decision-making under stochastic uncertainty is a core problem tackled in artificial intelligence (AI) and machine learning, planning, and verification. Planning and AI methods aim to find good or optimal strategies that maximise rewards or the probability of reaching a goal. Verification approaches focus on calculating the probability or reward, obtaining the strategy only as a side effect. In this paper, we connect three strands of work on obtaining strategies, all implemented in the context of the Modest Toolset: statistical model checking with either lightweight scheduler sampling or deep learning, and probabilistic model checking. We compare their different goals and abilities, and present newly extended experiments on Racetrack benchmarks that highlight the trade-offs between the methods. We conclude with an outlook on improving the existing approaches and on generalisations to continuous models, and emphasise the need for further tool development to integrate methods that find, evaluate, compare, and explain strategies.