Temporal noise-reduction filtering of image sequences is commonly applied in medical imaging and other applications, and a standard assessment technique is to measure the reduction in display noise variance. Theoretically and experimentally, we demonstrate that this is inadequate because it does not account for the interaction with the human observer. Using a new forced-choice method, we compare detectability of low-contrast objects and find the noise level for an unfiltered sequence that gives the same detectability as the filtered sequence. We report the equivalent detectability noise variance ratio, or EDVR. For a digital low-pass filter that halves the bandwidth, display noise reduction predicts an EDVR of 0.5. The measured value averaged over three subjects, 0.93±0.19, compares favorably with the 0.85 predicted from a theoretical human observer model, and both are very close to the value of 1.0 expected for no filtering. Hence, the effective, perceived noise is relatively unchanged by temporal low-pass filtering. The computational observer model successfully evaluates a simple low-pass temporal filter, and we anticipate that it can be used to predict the observer response to other image enhancement filters. © 1996 SPIE and IS&T.
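As a minimal illustration of where the display-noise prediction of EDVR = 0.5 comes from, the sketch below simulates independent frame noise passed through a two-frame temporal average. The two-frame average is an assumed stand-in for the paper's bandwidth-halving filter, not the actual filter used in the study; for uncorrelated frame noise it halves the pixel noise variance, which is the reduction the abstract argues overstates the perceptual benefit.

```python
import numpy as np

# Sketch (assumed example, not the paper's experiment): a two-tap temporal
# average roughly halves the temporal bandwidth and, for independent frame
# noise, halves the pixel noise variance -- the basis of the 0.5 prediction.

rng = np.random.default_rng(0)

n_frames, h, w = 200, 64, 64
sigma = 10.0                      # standard deviation of additive frame noise
frames = rng.normal(0.0, sigma, size=(n_frames, h, w))

# Two-tap temporal averaging filter: y[t] = (x[t] + x[t-1]) / 2
filtered = 0.5 * (frames[1:] + frames[:-1])

var_unfiltered = frames.var()
var_filtered = filtered.var()

# Display-noise-variance ratio: approximately 0.5 for this filter.
print("variance ratio (filtered / unfiltered):", var_filtered / var_unfiltered)

# The abstract's point: the measured equivalent-detectability ratio (EDVR)
# for human observers was about 0.93, far from this 0.5 prediction, so raw
# variance reduction overstates the perceptual effect of temporal filtering.
```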