Background: Generative artificial intelligence (AI) tools such as ChatGPT have emerged as potentially valuable technologies for augmenting human expertise in healthcare, but uncertainty remains about their appropriate clinical applications and limitations. This review synthesizes current evidence on the use of generative AI for clinical decision support, patient data processing, and medical education.

Methods: A systematic search of the Web of Science, Scopus, and ProQuest databases identified 33 relevant studies published in 2023 that examined ChatGPT for healthcare uses. Two reviewers extracted data on study characteristics, AI system details, key results, and authors' conclusions. Evidence was synthesized qualitatively using a comparative analysis approach.

Results: Supervised use of ChatGPT-generated simulations appeared beneficial for clinical training, though oversight was critical. Numerous studies identified risks in relying on ChatGPT's clinical suggestions, citing frequent factual errors, outdated recommendations, and inappropriate advice. However, ChatGPT showed potential for enhancing workflows through the automation of medical documentation.

Conclusions: While generative AI shows promise for constrained uses such as supervised education and documentation, the findings caution against open-ended integration of ChatGPT into current clinical practice. Large-scale comparative effectiveness research is needed to establish evidence-based implementation guidance. Responsible translation will require governance, validation against the published literature, and a focus on human-AI collaboration rather than replacement. Further inquiry can clarify best practices for balancing innovation with safety.