Background
Artificial intelligence (AI) has the potential to transform healthcare, including the field of infectious disease diagnostics. This study assesses the capability of three large language models (LLMs), GPT-4, Llama 3, and Gemini 1.5, to generate differential diagnoses, comparing their outputs against those of medical experts to evaluate AI's potential to augment clinical decision-making.

Methods
This study evaluated the differential diagnosis capabilities of three LLMs, GPT-4, Llama 3, and Gemini 1.5, using 50 simulated infectious disease cases. The cases were diverse, complex, and reflective of common clinical scenarios, including detailed histories, symptoms, laboratory results, and imaging findings. Each model received standardized case information and produced a differential diagnosis, which was then compared against a reference differential diagnosis list created by medical experts. The analysis used the Jaccard index and Kendall's Tau to assess list similarity and rank-order agreement, summarizing findings with means, standard deviations, and combined p-values (an illustrative sketch of these metrics follows the abstract).

Results
The mean numbers of differential diagnoses generated by GPT-4, Llama 3, and Gemini 1.5 were 6.22, 5.06, and 10.02, respectively, each differing significantly (p < 0.001) from the medical experts. The mean Jaccard indices of GPT-4, Llama 3, and Gemini 1.5 were 0.3, 0.21, and 0.24, while the mean Kendall's Tau values were 0.4, 0.7, and 0.33, respectively. The combined p-values for GPT-4, Llama 3, and Gemini 1.5 were 1, 1, and 0.979, respectively, indicating no significant association between the differential diagnoses generated by the LLMs and those of the medical experts.

Conclusion
Although LLMs such as GPT-4, Llama 3, and Gemini 1.5 exhibit varying effectiveness, none aligned significantly with expert-level diagnostic accuracy, emphasizing the need for further development and refinement. The findings highlight the importance of rigorous validation, ethical considerations, and seamless integration into clinical workflows to ensure that AI tools effectively enhance healthcare delivery and patient outcomes.
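
For concreteness, the sketch below shows one way the two comparison metrics could be computed for a single case using Python and SciPy. This is a minimal illustration under stated assumptions, not the study's actual analysis code: the diagnosis lists are invented, and restricting Kendall's Tau to the diagnoses shared by both ranked lists is one plausible implementation, since the abstract does not specify how shared items or near-synonymous diagnoses were handled.

```python
# Illustrative sketch (not the authors' published code) of the two
# comparison metrics named in the Methods. Diagnosis names and the
# matching rules here are hypothetical assumptions.
from scipy.stats import kendalltau, combine_pvalues

def jaccard_index(llm_dx: list[str], expert_dx: list[str]) -> float:
    """Set overlap between an LLM's and the experts' diagnosis lists."""
    a, b = set(llm_dx), set(expert_dx)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_agreement(llm_dx: list[str], expert_dx: list[str]):
    """Kendall's Tau over the diagnoses the two ranked lists share."""
    shared = [dx for dx in expert_dx if dx in llm_dx]
    if len(shared) < 2:
        return None, None  # Tau is undefined for fewer than two items
    expert_ranks = [expert_dx.index(dx) for dx in shared]
    llm_ranks = [llm_dx.index(dx) for dx in shared]
    return kendalltau(expert_ranks, llm_ranks)

# Hypothetical single case: ranked differentials for a febrile traveler.
expert = ["malaria", "typhoid fever", "dengue", "leptospirosis"]
llm = ["dengue", "malaria", "chikungunya", "typhoid fever"]

print(jaccard_index(llm, expert))   # 3 shared / 5 total = 0.6
tau, p = rank_agreement(llm, expert)
print(tau, p)

# Per-case p-values would then be pooled across the 50 cases, e.g. with
# Fisher's method (one plausible reading of "combined p-values"):
# stat, combined_p = combine_pvalues([p1, p2, ...], method="fisher")
```

Repeating this per case and averaging yields the mean Jaccard index and mean Kendall's Tau reported per model in the Results.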