In most forecasting contexts, each target event has a resolution time at which the “ground truth” is revealed or determined. It is reasonable to expect that, as time passes and information relevant to the event’s resolution accrues, the accuracy of individual forecasts will improve. For example, we expect forecasts about stock prices on a given date to become more accurate as that date approaches, and forecasts about sports tournament winners to become more accurate as the tournament progresses. This time dependence presents several issues for extracting the wisdom of crowds, and for optimizing differential weights when members of the crowd forecast the same event at different times. In this chapter, we discuss the challenges associated with this time dependence and survey various solutions, comparing their quality in terms of collective accuracy. To illustrate, we use data from the Hybrid Forecasting Competition, in which volunteer non-professional forecasters predicted multiple geopolitical events with time horizons of several weeks or months, as well as data from the European Central Bank’s Survey of Professional Forecasters, which covers only a few select macroeconomic indices but has much longer time horizons (in some cases, several years). We address the problem of forecaster assessment by showing how model-based methods may be used as an alternative to proper scoring rules for evaluating the accuracy of individual forecasters; we show how information aggregation can balance concerns of forecast recency against sufficient crowd size; and we explore the relationship between crowd size, forecast timing, and aggregate accuracy. We also provide recommendations both for managers seeking to select the best analysts from the crowd and for aggregators looking to make the most of the overall crowd’s wisdom.
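To make the trade-off between forecast recency and crowd size concrete, here is a minimal sketch of one possible recency-weighted aggregation scheme. The exponential decay weights, the half-life parameter, and the use of Kish’s effective sample size as a crowd-size diagnostic are illustrative assumptions for this example, not the specific methods analyzed in the chapter.

```python
import math

def recency_weighted_aggregate(forecasts, half_life_days=14.0):
    """Aggregate probability forecasts with exponential recency weights.

    forecasts: list of (probability, age_in_days) pairs, where age is the
        time elapsed between the forecast and the aggregation point.
    half_life_days: a forecast this old receives half the weight of a
        brand-new one (an illustrative choice).

    Returns the weighted-average probability and the effective crowd size.
    """
    decay = math.log(2) / half_life_days
    weights = [math.exp(-decay * age) for _, age in forecasts]
    total = sum(weights)
    estimate = sum(w * p for (p, _), w in zip(forecasts, weights)) / total
    # Kish's effective sample size: how many equally weighted (fresh)
    # forecasts the down-weighted pool is worth. A small value warns
    # that recency weighting has effectively shrunk the crowd.
    ess = total ** 2 / sum(w * w for w in weights)
    return estimate, ess

# Recent forecasts dominate the estimate, but older ones still
# contribute, keeping the effective crowd size above one.
est, ess = recency_weighted_aggregate([(0.8, 1.0), (0.6, 10.0), (0.4, 30.0)])
```

Shortening the half-life tilts the aggregate toward the newest information at the cost of a smaller effective crowd; lengthening it does the reverse, which is exactly the tension the chapter examines.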