Artificial intelligence (AI) holds great promise in healthcare and medicine, with the potential to assist in many areas, from biological scientific discovery to clinical patient care to public health policymaking. However, the risk of AI methods generating factually incorrect or unfaithful information is a major concern, as such errors could lead to serious consequences. This review provides a comprehensive overview of the faithfulness problem in existing research on AI in healthcare and medicine, covering analysis of the causes of unfaithful results, evaluation metrics, and mitigation methods. We systematically review recent progress in optimizing factuality across generative medical AI methods, including knowledge-grounded large language models; text-to-text generation tasks such as medical text summarization and medical text simplification; multimodal-to-text generation tasks such as radiology report generation; and automatic medical fact-checking. We also discuss the challenges and limitations of ensuring the faithfulness of AI-generated information in these applications, along with forthcoming opportunities. We expect this review to help researchers and practitioners understand the faithfulness problem in AI-generated information in healthcare and medicine, as well as the recent progress in and challenges of related research. It can also serve as a guide for those interested in applying AI in medicine and healthcare.