Importance
Although artificial intelligence (AI) holds much promise across modern medicine, it also carries a significant risk of enabling the mass generation of targeted health disinformation. This poses an urgent threat to public health initiatives and demands rapid attention from health care professionals, AI developers, and regulators to ensure public safety.

Observations
As an example, using a single publicly available large language model, 102 distinct blog articles containing more than 17 000 words of disinformation related to vaccines and vaping were generated within 65 minutes. Each post was coercive and targeted at diverse societal groups, including young adults, young parents, older persons, pregnant people, and those with chronic health conditions. The blogs included fake patient and clinician testimonials and, when prompted, scientific-looking references. Additional generative AI tools created 20 accompanying realistic images in less than 2 minutes. This process was undertaken by health care professionals and researchers with no specialized knowledge of bypassing AI guardrails, relying solely on publicly available information.

Conclusions and Relevance
These observations demonstrate that when the guardrails of AI tools are insufficient, the capacity to rapidly generate large volumes of diverse and convincing disinformation is profound. Beyond the 2 example scenarios presented here, these findings demonstrate an urgent need for robust AI vigilance. AI tools are progressing rapidly, and alongside these advancements, emergent risks are becoming increasingly apparent. Key pillars of pharmacovigilance, including transparency, surveillance, and regulation, may serve as valuable examples for managing these risks and safeguarding public health.