It is well known that abstractive summaries are subject to hallucination: they may include material that is not supported by the original text. While summaries can be made hallucination-free by limiting them to general phrases, such summaries would not be very informative. Alternatively, one can try to avoid hallucinations by verifying that any specific entities in the summary appear in the original text in a similar context. This is the approach taken by our system, HERMAN. The system learns to recognize and verify quantity entities (dates, numbers, sums of money, etc.) in a beam's worth of abstractive summaries produced by state-of-the-art models, in order to up-rank those summaries whose quantity terms are supported by the original text. Experimental results demonstrate that the ROUGE scores of such up-ranked summaries have higher Precision than summaries that have not been up-ranked, without a comparable loss in Recall, resulting in higher F1. A preliminary human evaluation of up-ranked vs. original summaries shows that people prefer the former.
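The up-ranking idea can be sketched with a simple surface-level check. Note that this is an illustrative proxy, not HERMAN's learned verifier: here a hypothetical regex stands in for the learned quantity-entity recognizer, and exact string overlap stands in for contextual verification. Each beam candidate whose extracted quantities all appear in the source document is promoted ahead of candidates containing unsupported quantities.

```python
import re

def quantities(text):
    """Extract surface quantity tokens (a crude regex proxy for a
    learned quantity-entity recognizer)."""
    return {m.rstrip(".,") for m in re.findall(r"\d[\d,.]*", text)}

def uprank(source, beam):
    """Reorder beam candidates: summaries whose quantity tokens all
    occur in the source come first; stable sort preserves the
    original beam order among candidates with equal support."""
    src = quantities(source)
    return sorted(beam, key=lambda s: not quantities(s) <= src)

# Toy example: the first candidate hallucinates "$7 million".
source = "The company earned $5 million in 2019, up 12% from 2018."
beam = [
    "The company earned $7 million in 2019.",  # unsupported quantity
    "The company earned $5 million in 2019.",  # supported quantities
]
ranked = uprank(source, beam)
# ranked[0] is the candidate whose quantities appear in the source
```

In the full system a learned model would also check that each quantity appears in a similar context, not merely anywhere in the source; the sketch above captures only the reranking step.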