Scoring rules condense all information regarding the performance of a probabilistic forecast into a single numerical value, providing a convenient framework with which to rank and compare competing prediction schemes objectively. Although scoring rules provide only a single measure of forecast accuracy, the expected score can be decomposed into components that each assess a distinct aspect of the forecast, such as its calibration or information content. Since these components can depend on several factors, it is useful to evaluate forecast performance under different circumstances; if forecasters can identify situations in which their forecasts perform particularly poorly, they can refine their forecast strategy to address these deficiencies. To help forecasters identify such situations, a novel decomposition of scores is introduced that quantifies conditional forecast biases, allowing for a more detailed examination of the sources of information in the forecast. From this, we argue that decompositions of proper scores provide a broad generalisation of the well-known analysis of variance (ANOVA) framework. The new decomposition is applied to the Brier score, which is then used to evaluate probability forecasts that the daily maximum temperature will exceed a range of thresholds, issued by the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss). We demonstrate how the additional information provided by this decomposition can be used to improve the performance of these forecasts by identifying appropriate auxiliary information to include within statistical postprocessing methods.
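For orientation, and not as the new decomposition introduced here, the classical partition of the mean Brier score (Murphy, 1973) illustrates the kind of components referred to above. As a minimal sketch, suppose the $N$ binary outcomes $y_i \in \{0,1\}$ are paired with forecast probabilities taking $K$ distinct values $p_k$, each issued $n_k$ times, with conditional event frequency $\bar{y}_k$ within each group and overall base rate $\bar{y}$ (this notation is ours, chosen for illustration):
\[
\overline{\mathrm{BS}}
= \frac{1}{N}\sum_{i=1}^{N} \left(p_i - y_i\right)^2
= \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k \left(p_k - \bar{y}_k\right)^2}_{\text{reliability}}
\;-\; \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k \left(\bar{y}_k - \bar{y}\right)^2}_{\text{resolution}}
\;+\; \underbrace{\bar{y}\left(1 - \bar{y}\right)}_{\text{uncertainty}}.
\]
Reliability measures miscalibration (smaller is better), resolution measures information content (larger is better), and uncertainty depends only on the outcomes. The decomposition proposed in this paper extends this idea by additionally quantifying conditional forecast biases.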