Forecasts of future outcomes, such as the consequences of climate change, are given with different degrees of precision. Logically, more precise forecasts (e.g., a temperature increase of 3-4°) have a smaller probability of capturing the actual outcome than less precise forecasts (e.g., a temperature increase of 2-6°). Nevertheless, people often trust precise forecasts more than vague forecasts, perhaps because precision is associated with knowledge and expertise. In five experiments, we asked whether people expect highly confident forecasts to be associated with wider or narrower outcome ranges than less confident forecasts (Experiments 1, 2, and 5), and, conversely, whether they expect precise forecasts to be issued with higher or lower confidence than vague forecasts (Experiments 3 and 4). The results revealed two distinct ways of thinking about confidence intervals, labeled distributional (wide intervals seen as more probable than narrow intervals) and associative (wide intervals seen as more uncertain than narrow intervals). Distributional responses occurred somewhat more often in within-subjects designs, where wide and narrow prediction intervals and high and low probability estimates can be directly compared, whereas separate evaluations (in between-subjects designs) suggested that associative responses were slightly more frequent. These findings are relevant for experts communicating forecasts through confidence intervals.
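
The logical claim in the opening sentences can be stated as a worked inequality; this is a sketch assuming the narrow range is nested inside the wide one and that $T$ denotes the unknown temperature increase:

% Monotonicity of probability for nested prediction intervals:
% the narrow interval is a subset of the wide interval, so any
% distribution over T assigns it no more probability mass.
\[
[3^{\circ}, 4^{\circ}] \subseteq [2^{\circ}, 6^{\circ}]
\;\Rightarrow\;
\Pr\!\left(T \in [3^{\circ}, 4^{\circ}]\right) \le \Pr\!\left(T \in [2^{\circ}, 6^{\circ}]\right).
\]

The inequality holds for any probability distribution over $T$, which is why trusting the more precise forecast more strongly is, in this narrow logical sense, unwarranted.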