Background: The integration of large language models (LLMs) such as GPT-4 into healthcare presents both potential benefits and challenges. While LLMs have shown promise in applications ranging from scientific writing to personalized medicine, their practical utility and safety in clinical settings remain under scrutiny. Concerns about accuracy, ethical considerations and bias necessitate rigorous evaluation of these technologies against established medical standards.

Objective: To compare the completeness, necessity, dosage accuracy and overall safety of type 2 diabetes management plans created by GPT-4 with those devised by medical experts.

Methods: This study involved a comparative analysis using anonymized patient records from a healthcare setting in West Bengal, India. Management plans for 50 patients with type 2 diabetes were generated by GPT-4 and by three blinded medical experts. These plans were evaluated against a reference management plan based on American Diabetes Association guidelines. Completeness, necessity and dosage accuracy were quantified, and an error score was devised to assess the overall quality of the generated management plans. The safety of the GPT-4-generated plans was also assessed.

Results: Medical experts' management plans contained fewer missing medications than those generated by GPT-4 (p=0.008), whereas GPT-4-generated plans included fewer unnecessary medications (p=0.003). No significant difference was observed in the accuracy of drug dosages (p=0.975), and overall error scores were comparable between the human experts and GPT-4 (p=0.301). Safety issues were noted in 16% of the plans generated by GPT-4, highlighting potential risks associated with AI-generated management plans.

Conclusion: The study demonstrates that while GPT-4 can effectively reduce unnecessary drug prescriptions, it does not yet match the performance of medical experts in plan completeness and safety. The findings support the use of LLMs as supplementary tools in healthcare and underscore the need for improved algorithms and continuous human oversight to ensure the efficacy and safety of AI applications in clinical settings. Further research is necessary to improve the integration of LLMs into complex healthcare environments.
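
The abstract does not specify how the error score is constructed or which statistical tests produced the reported p-values. The sketch below is purely illustrative: it assumes the error score is an unweighted count of missing, unnecessary, and mis-dosed medications per plan, uses randomly generated placeholder counts in place of the study's real per-patient data, and applies a Wilcoxon signed-rank test as one plausible paired comparison. None of these choices are taken from the paper itself.

```python
import numpy as np
from scipy import stats

# Placeholder per-plan error counts for 50 patients; in the actual study these
# would come from comparing each plan against the ADA-based reference plan.
rng = np.random.default_rng(0)
n_patients = 50

def error_score(missing, unnecessary, dosage_errors):
    """Aggregate error score: one point per missing, unnecessary, or
    incorrectly dosed medication (an assumed, equal-weight scheme; the
    paper's exact weighting is not given in the abstract)."""
    return missing + unnecessary + dosage_errors

# Synthetic stand-ins for expert- and GPT-4-generated plan errors.
expert_scores = error_score(rng.poisson(1.0, n_patients),
                            rng.poisson(1.5, n_patients),
                            rng.poisson(0.5, n_patients))
gpt4_scores = error_score(rng.poisson(1.5, n_patients),
                          rng.poisson(1.0, n_patients),
                          rng.poisson(0.5, n_patients))

# Paired nonparametric comparison of overall error scores; Wilcoxon
# signed-rank is one reasonable choice for paired count data, but the
# abstract does not name the test actually used.
stat, p = stats.wilcoxon(expert_scores, gpt4_scores)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.3f}")
```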