Forecasting software, through the incorporation of automatic features for model selection and estimation, has made heretofore "complex" methods more accessible to the practitioner, giving rise concomitantly to claims that software now relieves the practitioner of the burden of technical knowledge. Academics, however, have questioned the wisdom of forecasting without foretraining. This paper presents a survey and evaluation of automatic forecasting, based on the features of thirteen forecasting packages which perform single-equation methods on time series data. Our goals are (a) to clarify for the practitioner the virtues and limitations of automatic forecasting, and (b) to assess whether the software encourages, if not nurtures, good forecasting practice in the identification, evaluation, and defense of a forecasting model. Our principal conclusions: forecasting software can provide substantial and reliable assistance to the practitioner in the selection of appropriate specifications for extrapolative models. With regard to important tasks involving the evaluation and presentation of forecasts, as well as the determination of whether the introduction of causal variables is worthwhile, the practitioner is left largely to his own devices, expertise, and judgment. The serious danger to the untrained practitioner is the "closed-world" problem of not knowing what you don't know.