In psychology, we often want to know whether an effect exists. The traditional way of answering this question is with frequentist statistics. However, a significance test against a null hypothesis of no effect cannot distinguish between two states of affairs: evidence of absence of an effect, and absence of evidence for or against an effect. Bayes factors can make this distinction; however, their uptake in psychology has so far been low, for two reasons. First, they require researchers to specify the range of effect sizes their theory predicts. Researchers are often unsure how to do this, leading to the use of inappropriate default values that may give misleading results. Second, many implementations of Bayes factors have a substantial technical learning curve. We present a case study and simulations demonstrating a simple method for generating a range of plausible effect sizes from the output of frequentist mixed-effects models. Bayes factors calculated using these estimates give intuitively reasonable results across a range of true effect sizes. The approach offers a principled solution to the problem of estimating predicted effect sizes, and produces results comparable to a state-of-the-art method without requiring researchers to learn novel statistical software.