In simulation-based validation, the detection of design errors requires both stimulus capable of activating the errors and checkers capable of detecting the resulting behavior as erroneous. Validation coverage metrics tend to address only the sufficiency of a testbench's stimulus component, whereas fault insertion techniques focus on the testbench's checker component. In this paper, we introduce "coverage discounting", an analytical technique that combines the benefits of each approach, overcomes their respective shortcomings, and provides significantly more information than performing both tasks separately. The proposed approach can be used with any functional coverage metric (including, and ideally, user-defined covergroups and bins) and with a variety of fault models and insertion mechanisms. We present an experimental case study in which the proposed approach is used to evaluate functional and pseudofunctional tests for a microprocessor. Simulation efficiency is improved through the use of an instruction set simulator, which has been instrumented to record functional coverage information as well as to insert faults according to an ad hoc fault model. The results demonstrate the benefits of coverage discounting: it correctly distinguishes high- and low-quality tests with similar coverage scores and exposes checker insufficiencies.